Apple WWDC 2019: Everything you need to know

  • 06 April 2019

Apple WWDC 2019, Apple’s annual Worldwide Developers Conference, began yesterday, revealing what we can expect from iOS 13, macOS Catalina, watchOS 6 and some limited updates to tvOS, as well as an entirely new iPadOS.

Further exciting news came on the development front, with the announcements of SwiftUI and Project Catalyst for cross-platform development, as well as new capabilities within Apple’s AR and machine learning kits, ARKit 3 and Core ML 3.

Below we’ll talk through some of the most exciting announcements from this year’s Apple WWDC, and the opportunities we see them opening up.

iPadOS

Apple has noticeably beefed up the iPad’s capabilities, spinning off a bespoke iPadOS that focuses on productivity features making use of its larger screen. Multi-window displays are now supported within the same app, allowing you to, for example, reference one email alongside another. More granular improvements include quicker file and document navigation, support for richer fonts in email, and the ability to plug in USB drives.
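
For developers, this multi-window behaviour is built on iPadOS 13’s new scene APIs. A minimal sketch, assuming an app that has opted into multiple scenes in its Info.plist (the activity type and email ID are hypothetical):

```swift
import UIKit

// A minimal sketch: open a second window (scene) showing a specific email.
// Assumes UIApplicationSupportsMultipleScenes is enabled in Info.plist;
// the activity type and email ID are hypothetical.
func openEmailInNewWindow(emailID: String) {
    let activity = NSUserActivity(activityType: "com.example.mail.viewEmail")
    activity.userInfo = ["emailID": emailID]

    // Ask the system to create a new scene session for this activity,
    // which appears as a second window alongside the current one on iPad.
    UIApplication.shared.requestSceneSessionActivation(
        nil,                       // nil asks for a brand new scene session
        userActivity: activity,
        options: nil,
        errorHandler: { error in
            print("Could not open a new window: \(error)")
        }
    )
}
```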

For when you absolutely need a desktop, ‘Sidecar’ effectively turns an iPad into a second screen for a Mac, and supports seamless handoff of a task started on the Mac to the iPad. A third announcement, Project Catalyst, will support cross-platform development, giving developers the tools to easily bring apps built for iPad to the Mac, as well as iPhone, and tap into the native features of each.
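
Much of a Catalyst port needs no code changes, but where Mac and iPad behaviour should differ, Swift’s targetEnvironment condition lets a shared codebase branch at compile time. A minimal sketch of that pattern (the view controller is hypothetical):

```swift
import UIKit

// A minimal sketch of branching shared code between iPad and a Catalyst Mac build.
class DocumentViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        #if targetEnvironment(macCatalyst)
        // Running as a Mac app built with Catalyst: lean on desktop conventions.
        navigationController?.setNavigationBarHidden(true, animated: false)
        #else
        // Running on iPad or iPhone: keep the touch-first navigation bar.
        navigationController?.setNavigationBarHidden(false, animated: false)
        #endif
    }
}
```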

Why are we excited?

The iPad is evolving into something with the potential to replace a whole host of existing enterprise productivity tools. Its balance of portability, usability and power gives it a much broader range of potential use cases than a phone or a laptop, particularly in the enterprise.

Easy cross-platform app development promises a much more powerful ecosystem of tools which speak to each other increasingly intuitively. As well as a greater range of use cases, this enables flexible workflows that take their cues from user preference rather than pure necessity. In our Connected Organisations report, we’ve outlined some of the benefits of creating a smarter, more connected workplace.

Apple Watch

watchOS 6 allows the Apple Watch to do more independently of the iPhone, with the launch of its own App Store and the ability for developers to build apps specifically for the watch.

Some neat new watch use cases include a calculator app that splits bills and works out tips, and ‘taptics’ that give the user a gentle pulse to mark the passing of the hour.
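
As a flavour of the arithmetic involved, splitting a bill with a tip is a one-line formula: the bill multiplied by (1 + tip percentage), divided by the number of people. A minimal sketch, not Apple’s implementation:

```swift
// Hypothetical helper, not Apple's implementation: split a bill with a tip.
func amountPerPerson(bill: Double, tipPercent: Double, people: Int) -> Double {
    precondition(people > 0, "Need at least one person to split the bill")
    let totalWithTip = bill * (1 + tipPercent / 100)
    return totalWithTip / Double(people)
}

// Example: an £84 bill with a 15% tip split four ways is £24.15 each.
let share = amountPerPerson(bill: 84, tipPercent: 15, people: 4)
print(String(format: "Each person pays £%.2f", share))
```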

Why are we excited?

Developing bespoke products for voice and wearable interfaces significantly opens up the range of use cases and interactions. Freeing developers from being ancillary to apps on people’s home screens will let them create propositions for needs that only a wearable can meet. This also continues the wearable’s long march towards being a more independent device: the watch is likely to be a key component in the post-iPhone ecosystem of the future.

Health

The watchOS developments also see Apple continuing to add to its health tracking features. After its roundly mocked continued omission, Apple is finally adding menstrual cycle tracking, with an optional fertility tracking component. Long-term, comparative tracking of exercise and fitness levels, in far more granular detail than has previously been offered, will enable much better tracking and coaching opportunities to improve people’s health.
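
Developers can already read this kind of longitudinal data through HealthKit. A minimal sketch, assuming the app has the HealthKit capability and the relevant usage descriptions, reading a user’s most recent heart rate samples:

```swift
import HealthKit

let healthStore = HKHealthStore()
let heartRateType = HKQuantityType.quantityType(forIdentifier: .heartRate)!

// Ask the user's permission to read heart rate data.
healthStore.requestAuthorization(toShare: nil, read: [heartRateType]) { granted, _ in
    guard granted else { return }

    // Fetch the most recent samples, newest first.
    let newestFirst = NSSortDescriptor(key: HKSampleSortIdentifierStartDate, ascending: false)
    let query = HKSampleQuery(sampleType: heartRateType,
                              predicate: nil,
                              limit: 20,
                              sortDescriptors: [newestFirst]) { _, samples, _ in
        let bpm = HKUnit.count().unitDivided(by: .minute())
        for case let sample as HKQuantitySample in samples ?? [] {
            print("\(sample.startDate): \(sample.quantity.doubleValue(for: bpm)) bpm")
        }
    }
    healthStore.execute(query)
}
```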

Why are we excited?

The Apple Watch represents a natural home for many health and fitness applications, and its more distinct identity has some interesting implications. An app designed specifically for the Apple Watch can leverage the range of biometrics supported by the platform in a way that an iPhone or iPad app can’t. And we really haven’t scratched the surface of real medical use cases for these devices. With increasingly granular data and growing medical evidence for their value, we’re hoping to see more and more health organisations making use of them.

SwiftUI

Undoubtedly a highlight for developers was the announcement of SwiftUI, a framework built to make coding with Apple’s programming language even faster and more immediately interactive, featuring some drag-and-drop elements. It relieves developers of some of the most time-consuming manual coding tasks so they can focus on unlocking higher-value features. It also makes things like simple prototyping faster, and more accessible to non-developers.
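
To give a flavour of the declarative style, here is a minimal sketch (the view and its content are hypothetical): the view describes what the interface should look like for its current state, and the framework redraws it whenever that state changes.

```swift
import SwiftUI

// A minimal SwiftUI view: declare what the interface should look like for
// the current state, and the framework redraws it whenever that state changes.
struct SessionView: View {
    @State private var isFavourite = false

    var body: some View {
        VStack(spacing: 12) {
            Text("WWDC 2019 Keynote")
                .font(.headline)
            Button(action: { self.isFavourite.toggle() }) {
                Text(isFavourite ? "Remove favourite" : "Add to favourites")
            }
        }
        .padding()
    }
}
```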

Why are we excited?

Lowering the effort required for simpler tasks frees up developer time for tougher, more valuable challenges. It also lowers the barrier to innovation by enabling faster prototype and proof-of-concept builds. Lightweight initial builds enable a tighter feedback loop, so products are built in an increasingly lean and efficient way, and you can try new things with lower risk.

Machine Learning

Among other enhancements to its machine learning toolkit, Core ML 3 will for the first time allow developers with no machine learning expertise to use its preset components to build a range of models directly into apps, with on-device training specific to each user.
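
The on-device personalisation is exposed through Core ML’s new update API. A minimal sketch, assuming an updatable model bundled with the app and a batch of labelled examples collected on the device (the model and file names are hypothetical):

```swift
import CoreML

// A minimal sketch: retrain an updatable Core ML model on the device using
// examples gathered from this user. The model and file names are hypothetical.
func personaliseModel(modelURL: URL, trainingData: MLBatchProvider) throws {
    let task = try MLUpdateTask(
        forModelAt: modelURL,          // compiled, updatable .mlmodelc in the app bundle
        trainingData: trainingData,    // labelled examples collected on this device
        configuration: nil,
        completionHandler: { context in
            // Persist the personalised model for future predictions.
            let updatedURL = FileManager.default.temporaryDirectory
                .appendingPathComponent("PersonalisedModel.mlmodelc")
            try? context.model.write(to: updatedURL)
        }
    )
    task.resume()
}
```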

Core ML capabilities are powering a range of performance and interface improvements in Apple’s own services. tvOS will gauge what kind of film you might fancy watching based on your viewing history and serve you related content, much like Netflix’s recommendation engine. Apple’s native Photos app will use ML classification to organise images, rather than leaving users to trawl through increasingly voluminous albums. Elsewhere, HomePod’s voice recognition will be used to identify each member of the household and deliver them personalised content: their calendar, reminders and notes.

Why are we excited?

Core ML 3 is a powerful tool that can be easily embedded into apps to create deeply personalised experiences. Device-specific training means users drive their own interactions, prompting more frequent and more rewarding engagement with apps. It also strengthens any recommendations with the weight of the user’s own experience and history.

Accessibility - voice

Apple’s main accessibility news doubles up as another use case for its on-device ML functionality. Voice Control, which allows users to control their devices entirely by voice, employs Siri’s speech recognition technology. It is entirely personalised, training on and accepting only a single user’s voice cues.
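
Voice Control itself is a system-level feature rather than an API, but the same on-device speech recognition is available to developers through the Speech framework in iOS 13. A minimal sketch, assuming the user has already granted speech recognition permission:

```swift
import Speech

// A minimal sketch: transcribe an audio file without the audio leaving the device.
// Assumes speech recognition permission has already been requested and granted.
func transcribeOnDevice(audioFileURL: URL) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-GB")),
          recognizer.supportsOnDeviceRecognition else {
        print("On-device recognition is not available for this locale")
        return
    }

    let request = SFSpeechURLRecognitionRequest(url: audioFileURL)
    request.requiresOnDeviceRecognition = true   // new in iOS 13: keep processing local

    _ = recognizer.recognitionTask(with: request) { result, _ in
        if let result = result, result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }
}
```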

Why are we excited?

We make better apps for a wider range of users when we design with accessibility in mind, whether an impairment is permanent (for example, someone who is partially sighted) or situational (someone working in poor light, or needing to do something with their hands full). Any tool that helps make our products available to a wider audience is a valuable tool. Find out more about how our teams took an accessibility-first approach to building the new groceries app for Tesco.

Augmented Reality

As in previous years, Apple used a demo to illustrate its increasingly sophisticated suite of AR tools. ARKit 3, RealityKit and Reality Composer come together as a set of tools to make AR a more practical and easily applied proposition for developers. Real-time gesture tracking and people occlusion, which allows virtual objects to be rendered in front of or behind a person depending on their position, were among the impressive new features debuted.
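
Enabling people occlusion in an ARKit 3 session is a small configuration change, checked against device support since it requires recent hardware. A minimal sketch:

```swift
import ARKit

// A minimal sketch: turn on people occlusion where the device supports it,
// so virtual content can be drawn behind people in the camera feed.
func makeConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    }
    return configuration
}

// Typically run on a RealityKit ARView or an ARSCNView session:
// arView.session.run(makeConfiguration())
```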

Why are we excited?

AR has so far delivered few breakout experiences beyond games. We’re starting to see some really interesting use cases emerge: IKEA was an early adopter for furniture ‘showrooming’, and is now building on that to add a full ecommerce offering. The building blocks Apple are putting in place bring us closer to seeing dedicated AR hardware from them sooner rather than later. If that hardware frees users from having to hold a phone out in front of them (the constraint that currently limits AR to infrequent, short use cases rather than ‘always on’ ones), we’ll see an explosion in the potential of this technology.

Privacy

Privacy was a key theme again this year, with Apple continuing to position itself as a leader in this space. The flagship announcement was Sign In with Apple, an alternative to using a Google or Facebook login with a third-party app, which prevents the app from accessing the user’s personal data. Surprisingly, supporting it is going to be mandatory if your app offers any other form of third-party sign-in.
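
For developers, Sign In with Apple is surfaced through the AuthenticationServices framework. A minimal sketch of kicking off the authorisation flow (delegate and presentation handling are omitted):

```swift
import AuthenticationServices

// A minimal sketch: start the Sign In with Apple flow. The delegate receives
// the resulting credential (or an error); presentation handling is omitted.
func startSignInWithApple(delegate: ASAuthorizationControllerDelegate) {
    let request = ASAuthorizationAppleIDProvider().createRequest()
    request.requestedScopes = [.fullName, .email]   // the user can still hide their real email

    let controller = ASAuthorizationController(authorizationRequests: [request])
    controller.delegate = delegate
    controller.performRequests()
}
```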

Enhanced privacy settings mean users will also be able to grant an app location access just once, requiring it to ask for consent again on each subsequent request. Apple was also keen to stress that voice and ambient data (e.g. speech directed at Siri) would be made inaccessible to Apple itself and to third parties via end-to-end encryption.

Apple’s new guidelines specify that apps which compile information about users from any source other than the user directly, or without the user’s explicit consent (even public databases), will not be permitted on the App Store.

Why are we excited?

Worries around data security and privacy are palpable, with tech companies coming under both public and regulatory pressure. There is an opportunity for companies to get out ahead and demonstrate that they can handle user data in a responsible and transparent manner. Designing and building with privacy in mind, minimising the information we collect and being transparent about its use, will help in the long run with winning and retaining the trust of customers.

Want to know more?

Click here to get in touch