As an iOS developer, as in previous years, I have sorted out the areas I think I will need to focus on.
The new SDK adds two major frameworks: Core ML, which makes it simple to integrate machine learning models, and ARKit, for building augmented reality (AR) applications.
- Core ML
Since the advent of AlphaGo, deep learning has undoubtedly become an industry hot spot, and last year Google shifted its strategy from mobile-first to AI-first. It is fair to say that nearly all the front-line Internet companies are betting on AI; machine learning, and deep learning in particular, is currently one of the most promising directions.
If you are not familiar with machine learning, allow me a brief (and admittedly amateur) introduction. You can think of a machine learning model as a black-box function: given some input (perhaps a paragraph of text, or an image), the function produces a specific output (for example, the names of people mentioned in the text, or the store brands appearing in the picture). At first this model may be very crude and unable to give correct results, but you can train and improve it with a large amount of existing data paired with known correct answers. Once the model is well enough designed and the amount of training is large enough, the black box will be accurate not only on the training data but will often also give the right answer for unknown real-world input. Such a model is a well-trained model that can be used in practice.
Training a machine learning model is a substantial piece of work in itself. Core ML's role is rather to convert an already trained model into a form iOS can understand, "feed" new data to the model, and obtain its output. Abstraction and modeling are not hard, but improving and training models could easily occupy a lifetime of study, and most readers of this article probably can't be bothered with that. Fortunately, Apple provides a set of tools for converting various machine learning models into a form Core ML understands. With this, you can easily use models that others have already trained in your own iOS app. Previously you would have had to find a model yourself and then write some C++ code to call it, and it was difficult to take advantage of the GPU in iOS devices or of Metal (unless you wrote shaders yourself to do the matrix operations). Core ML lowers the barrier to using a model considerably.
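As a sketch of how little code the consuming side involves: once a converted model file is added to an Xcode project, Xcode generates a Swift class for it. Assuming a hypothetical image-classification model named `FlowerClassifier` (the class name, input name `image`, and output property `classLabel` all depend on the converted model and are invented here for illustration), a prediction might look like this:

```swift
import CoreML

// Hypothetical: Xcode generates a `FlowerClassifier` class from a
// FlowerClassifier.mlmodel file added to the project.
func classify(_ pixelBuffer: CVPixelBuffer) {
    do {
        let model = FlowerClassifier()
        // The generated `prediction` method's parameter names mirror
        // the model's declared inputs.
        let output = try model.prediction(image: pixelBuffer)
        print("Predicted label: \(output.classLabel)")
    } catch {
        print("Prediction failed: \(error)")
    }
}
```

The heavy lifting (loading the model, dispatching to CPU or GPU) is handled by Core ML behind the generated class.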
Core ML also powers the new Vision framework for image recognition and the semantics-related NLP APIs in Foundation. Ordinary developers can benefit directly from these high-level APIs, for example for recognizing faces in pictures or recognizing text. Parts of this existed in previous versions of the SDK, but in the iOS 11 SDK they have been gathered into the new frameworks, and some more concrete, lower-level controls are exposed: for instance, you can use the high-level interfaces in Vision while specifying the underlying model yourself. This brings new possibilities to computer vision on iOS.

The AI efforts of Google or Samsung on Android have mostly been services integrated into their own applications. By contrast, Apple, leveraging its control over its ecosystem and hardware, has handed more of the options to third-party developers.
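To illustrate the "high-level interface, custom model" combination: a Core ML model can be wrapped in `VNCoreMLModel` and driven through Vision's request API. `FlowerClassifier` below stands in for a hypothetical Xcode-generated model class; the Vision calls themselves are the real API:

```swift
import Vision
import CoreML

// Sketch: running a custom Core ML model through Vision's request pipeline.
// `FlowerClassifier` is a hypothetical Xcode-generated model class.
func detectScene(in image: CGImage) {
    guard let visionModel = try? VNCoreMLModel(for: FlowerClassifier().model) else {
        return
    }
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // For classification models, Vision returns observations
        // sorted by confidence.
        guard let results = request.results as? [VNClassificationObservation],
              let best = results.first else { return }
        print("\(best.identifier) – confidence \(best.confidence)")
    }
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```

Vision takes care of scaling and cropping the input image to whatever the model expects, which is exactly the kind of plumbing you previously had to write yourself.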
- ARKit
The AR demo was arguably the one real highlight of the keynote. With the iOS 11 SDK, Apple brings developers, and AR-related developers in particular, a great gift: ARKit. AR is not exactly a new technology; a game like Pokémon Go has already proved its potential in gaming. But beyond the IP and the novelty, Pokémon Go hardly qualifies as a showcase of what AR technology can do. The on-stage demo showed us the possibilities: even in this rough early state, ARKit uses a single camera lens plus the gyroscope to perform remarkably well at plane detection and at keeping virtual objects stable. The Apple that is "not the first, but the best" seems, at this moment, to be returning to the stage.
ARKit greatly lowers the barrier for ordinary developers to play with AR, and it is Apple's play, at this stage, in the AR/VR competition. With the help of ARKit and SceneKit, you can imagine more games along the lines of Pokémon Go (a virtual-pet AR game is probably the first idea that comes to anyone's mind), and with existing skills, even things like AR films that fully exploit multimedia may no longer be a mere dream.
The API itself, by contrast, is not very complicated. The view is almost an extension of SceneKit, and understanding the real world is already handled by the system; what developers need to do is mostly place virtual objects at the proper positions on screen and handle the interactions between objects. Combined with Core ML to recognize and interact with the actual objects seen by the camera, all kinds of imaginative special-purpose camera or photography apps become possible.
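A minimal sketch of that workflow is below. Note that class names shifted during the betas; this uses the shipping names such as `ARWorldTrackingConfiguration`. The view controller detects horizontal planes and drops a small virtual box on each one:

```swift
import UIKit
import ARKit
import SceneKit

// Sketch: an ARSCNView session with horizontal plane detection,
// placing a virtual box on each detected plane.
class ARViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    // Called when ARKit adds a node for a newly detected anchor.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode,
                  for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else { return }
        // Sizes are in meters: a 10 cm box resting on the plane.
        let box = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1,
                                           length: 0.1, chamferRadius: 0))
        box.position = SCNVector3(0, 0.05, 0)
        node.addChildNode(box)
    }
}
```

Everything hard (camera tracking, sensor fusion, plane estimation) happens inside the session; the developer's code really is mostly about placing SceneKit content.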
- Xcode: editor and compiler
For developers, speed is life, and a developer's life is wasted waiting for compilation. Swift has been well received since its debut, but slow compilation, occasionally missing syntax hints, no refactoring support, and a generally lacking toolchain have been its biggest stains. Xcode 9's editor has been rewritten and now supports refactoring Swift code (although it is still fairly basic), version control is given a more prominent position with GitHub integration added, and deployment and debugging can now be done over wireless LAN.
The new compiler system has been rewritten in Swift, and after some comparison, compile speed really has improved a lot. The build time of the company project I am currently working on dropped from about three and a half minutes to roughly two minutes, which is quite noticeable.

The indexing system in Xcode 9 also uses a new engine, which is said to search up to 50 times faster in large projects. However, perhaps because the project I am involved in is not large enough, this has not been obvious to me; the editor still struggles with Swift code from time to time. It may be that the indexing system and the compilation system are not yet well coordinated. This is beta software, after all, so perhaps the Xcode team should be given more time (though it may well end up shipping like this).

Because the compiler is compatible with both Swift 3 and Swift 4 (the Swift version can be set in Build Settings), barring surprises I will probably use the Xcode 9 beta for daily development and then switch back to Xcode 8 for release builds. After all, saving a minute and a half on every full build is a tempting prospect.
This beta is of surprisingly good quality, perhaps because the last two years of relatively minor updates have given Apple's software teams comparatively ample development time. In short, the Xcode 9 beta is working very well for me so far.
- Named Color
This is a personal favorite change. You can now add colors to an xcassets catalog and then reference those colors from code or from Interface Builder. It works like this: when building UI with IB, one of the headaches used to be the designer saying "we'd like to change the theme color." You would have to hunt down every use of that color and replace it. Now you just change it once in xcassets, and everything in IB that references the color updates accordingly.
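Referencing such a color from code is a one-liner. Assuming a color set named "ThemeColor" exists in the app's asset catalog (the name here is invented for the example):

```swift
import UIKit

// New in iOS 11: look up a named color from the asset catalog.
// "ThemeColor" is a hypothetical color set name; the initializer
// returns nil if no color with that name exists.
let label = UILabel()
label.textColor = UIColor(named: "ThemeColor") ?? .black
```

The same named color is what IB references, so changing the value in xcassets updates both code and storyboards at once.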
- Other notable changes

The rest are all small changes. From a quick browse, here is what I think is worth mentioning, along with links to the relevant references.
Drag and Drop – a very standard set of iOS APIs. The system handles most of the work for us; developers almost only need to process the results. UITextView and UITextField support drag and drop natively; UICollectionView and UITableView have a series of dedicated delegates to signal the start and end of a drag. You can also define drag behavior for any UIView.
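As an illustration of that last point, here is a minimal sketch of making an arbitrary view draggable by attaching a `UIDragInteraction` (the image-view subclass and its setup are invented for the example):

```swift
import UIKit

// Sketch: enabling drag from an arbitrary UIView via UIDragInteraction.
class DraggableImageView: UIImageView, UIDragInteractionDelegate {
    override init(frame: CGRect) {
        super.init(frame: frame)
        isUserInteractionEnabled = true
        addInteraction(UIDragInteraction(delegate: self))
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    // Supply the items to drag when a drag gesture begins on this view.
    func dragInteraction(_ interaction: UIDragInteraction,
                         itemsForBeginning session: UIDragSession) -> [UIDragItem] {
        guard let image = image else { return [] }
        // UIImage conforms to NSItemProviderWriting, so it can be
        // wrapped directly in an item provider.
        return [UIDragItem(itemProvider: NSItemProvider(object: image))]
    }
}
```

The system renders the drag preview and handles the session lifecycle; the delegate mainly just says what is being dragged.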
FileProvider and FileProviderUI – provide an interface similar to the Files app that lets you access files on the user's device or in the cloud. I am fairly sure this will become the standard for document-related apps going forward.
No more 32-bit app support – although 32-bit apps can still run in beta 1, Apple has explicitly stated that this will be removed in a subsequent iOS 11 beta. So if you want your program to run on iOS 11 devices, recompiling for 64-bit is a must.
DeviceCheck – developers who track users day in and day out via the advertising identifier now have a better choice (for legitimate business purposes, of course). DeviceCheck lets you, through your own server, communicate with Apple's servers and set two bits of data for a single device. Simply put: you use the DeviceCheck API to generate a token on the device, send that token to your server, and your server then communicates with Apple's API to update or query the two-bit value for that device. These two bits can be used to mark things like whether the user has already received a promotional reward.
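The on-device half of that flow is small. A sketch, in which actually sending the token to your own backend is left as a hypothetical step:

```swift
import DeviceCheck

// Sketch: generating a DeviceCheck token on the device. The token must
// then be forwarded to your own server, which talks to Apple's API to
// read or update the device's two bits; that server side is not shown.
func fetchDeviceToken() {
    guard DCDevice.current.isSupported else { return }  // e.g. false in the simulator
    DCDevice.current.generateToken { token, error in
        if let token = token {
            // Base64-encode the opaque token data for transport.
            let tokenString = token.base64EncodedString()
            print("Send this to your server: \(tokenString)")
        } else if let error = error {
            print("Token generation failed: \(error)")
        }
    }
}
```

Note the token itself is opaque; all interpretation of the two bits happens between your server and Apple's.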
From onevcat (喵神)'s blog: https://onevcat.com/2017/06/ios-11-sdk/