I’ve heard a couple of people on podcasts mention this idea of Siri being the future operating system. It’s interesting both because this is the narrative Google is pushing for its Assistant (desktop, then mobile, next AI) and because of where Siri has been and where it’s going.
I think we can all agree that Siri came out of the gates first and had some strengths, but it has quickly faded in comparison to Alexa and the Google Assistant. It may still have some areas where it is stronger, but they aren’t numerous. So what’s next for Siri?
Siri shortcuts in iOS 12
It’s easy to group the three Siri announcements from WWDC together, and they were intentionally designed that way. The first is that third-party apps will be able to offer actions to Siri. The example on stage (which I liked, as I have one of those Tile devices) is to say “find my keys” and your smart key ring will…ring.
This opens up many integrations that were previously unimaginable, but there are still questions about how this will work for a service like Spotify, which can’t offer an action for every song in its library.
The second is an extension of the smart suggestions feature from Siri. Now, Siri will suggest actions based on your location and activity.
So if you are at the cinema, Siri can suggest turning on Do Not Disturb. Or you can simply tap the suggestion on your lock screen to order an Uber at the same time every morning.
The final feature is Shortcuts, which lets you chain actions together. This builds on the previously mentioned features: an app that offers an action makes it available for use in a shortcut, and if you use it in a particular situation, Siri will suggest it to you.
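For a rough sense of how apps expose actions to Siri, the iOS 12 mechanism is “donating” an NSUserActivity (or a custom intent) to the system. A minimal Swift sketch, where the activity type string and invocation phrase are hypothetical examples of my own, not from any real app:

```swift
import Intents
import UIKit

// Donate an "order coffee" action so Siri can learn when the user performs it,
// suggest it on the lock screen, and let it be chained into a Shortcut.
// "com.example.coffee.order" and the phrase are illustrative placeholders.
func donateOrderActivity(from viewController: UIViewController) {
    let activity = NSUserActivity(activityType: "com.example.coffee.order")
    activity.title = "Order my usual coffee"
    // New in iOS 12: marks the activity as eligible for Siri suggestions.
    activity.isEligibleForPrediction = true
    // The phrase Siri proposes when the user records a voice trigger.
    activity.suggestedInvocationPhrase = "Coffee time"
    // Attaching the activity to the current view controller and making it
    // current counts as the donation.
    viewController.userActivity = activity
    activity.becomeCurrent()
}
```

Each time the user actually performs the action, the app donates it again; Siri’s on-device model then learns the time and context patterns that drive the suggestions described above.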
The net effect is a much more powerful Siri…but it is not the same as Alexa or the Google Assistant.
A cloud assistant
I’ve had a private theory/hope for a while that the reason Siri has been slow to update recently is that Apple was working on “Siri 2.0”. This would be based in the cloud, much like the Google Assistant and Amazon Alexa, and have a more open API. It would need to be rebuilt from the ground up, which would explain the lack of visible changes in the meantime.
This theory looks like it is wrong now. To some degree this is a form of “Siri 2.0”, but it’s certainly not cloud-based. All the processing is happening on your device. That makes it unlikely that we’ll be able to have triggers such as “if it starts raining” (though I’m not a developer and I haven’t seen the APIs, so if I’m wrong…cool).
It’s hard to imagine Siri conducting a call for you in the background à la Google Duplex.
A small step for a voice assistant, a giant leap for Siri
Overall, these changes don’t feel huge when compared to other voice assistants. I’ll admit that there are some unique and helpful aspects of these new changes (in both Siri Shortcuts and suggested actions), but Siri actions seem behind in comparison with other assistants.
I’m looking forward to these changes, but I still wonder if the future of Siri is a more cloud-based version.