Apart from that, Apple still thinks we use photography as we did 30 years ago: we go on a trip, take a bunch of photos, then struggle with how to show those photos to our friends when we get back.
Well, I’ve got news for you, Apple: that’s maybe 1% of photography, and not really an issue most of us deal with.
What is the problem that needs fixing? It is that photography is changing. I showed my girlfriend some tiny text on the back of a credit card. Without hesitating she pulled out her camera, took a photo, and then zoomed in on the photo to read the text.
I can’t say that I disagree with van Santen on this point. Not all of my pictures are of trips or specific people. If you look at my Camera Roll, you will certainly see a collection of photos I’ve taken of my kids, my family, and my friends. But you will also find pictures of work whiteboards, screenshots of clever tweets from Twitter, UI example screenshots, pictures of receipts, and so on.
I love the idea of classification at the image level. Apple does it with selfies (a recent addition), but that’s a bit of a cheat because it’s based on the camera being used. I’m interested to see how (or if) their new AI processing can help auto-classify some of these pictures (think receipts for a start).
I was bicycling along the canals with my teenage daughter when she spotted a ‘missing cat’ poster. She pulled out her phone, took a photo of the poster without looking twice, and put the phone back into her pocket. I said, ‘That’s pretty smart,’ and she replied, ‘Well, how else are we going to remember all that information if we ever find that cat?’
What if the system could recognize that the image was an informational poster, convert it into a note, and create an entry in the built-in Notes app? That’s intelligence. That’s allowing the data and the image to work together to implement the intent of the original picture.
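To make the idea concrete, a pipeline like this could be sketched roughly as follows. This is a minimal illustration, not Apple’s actual approach: it assumes an OCR step (e.g., an on-device text-recognition API) has already produced plain text from the photo, and the keyword list, phone-number pattern, and `to_note` helper are all hypothetical stand-ins for real classification.

```python
import re

# Hypothetical heuristic: text with contact info plus "poster-like" keywords
# suggests an informational poster rather than a casual snapshot.
POSTER_KEYWORDS = {"missing", "lost", "found", "reward", "contact", "call"}
PHONE_RE = re.compile(r"\+?\d[\d\s\-]{6,}\d")

def looks_like_poster(ocr_text: str) -> bool:
    """Guess whether OCR-extracted text came from an informational poster."""
    words = {w.strip(".,!?").lower() for w in ocr_text.split()}
    has_keyword = bool(words & POSTER_KEYWORDS)
    has_phone = bool(PHONE_RE.search(ocr_text))
    return has_keyword and has_phone

def to_note(ocr_text: str) -> dict:
    """Turn the extracted text into a simple note record (title = first line)."""
    lines = [ln.strip() for ln in ocr_text.splitlines() if ln.strip()]
    return {"title": lines[0] if lines else "Untitled",
            "body": "\n".join(lines[1:])}

sample = "MISSING CAT\nGrey tabby, answers to Pixel\nCall 06-1234 5678"
if looks_like_poster(sample):
    note = to_note(sample)  # in a real system, this would go to the Notes app
```

The point isn’t the crude keyword matching; it’s that once the system has the text, routing the photo’s *information* (not just its pixels) to the right place becomes straightforward.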
Looking forward to seeing what happens in this space…