The coming (augmented) reality of software UI

Recently I was helping my 13-year-old son with some homework from his computer class. The programming language the class is using is called “Scratch,” which involves dragging and dropping various blocks to assemble program instructions. “This is dumb—it’s for little kids, I’m never going to use this, it’s a waste of time,” he said, or something to that effect.

The funny thing is, as an adult writing code for business applications, the code editor I use arguably makes things even simpler than Scratch does. After I type a letter or two, a suggestion pops up offering a block of code that goes well beyond anything my son might drag in using Scratch, and I don’t have to navigate through categories to find the block I need. The suggestions are context-aware, taking into account what I’ve already written to produce code that often matches exactly what I need, much like the internet search suggestions that sometimes seem creepily aware of what’s on your mind. And that’s before factoring in new large language models like GPT-4: initial versions of these next-gen, AI-powered coding suggestions are already out there, and if you don’t write code, yes, they’re coming to Microsoft Office 365 as well.

So, at one end of the spectrum, user interfaces are doing more for us automatically. But what about how we interact with the interface itself? The visuals have improved significantly over the years, yet the core components of our interaction with software are still rooted in the mouse, invented more than 50 years ago; the monitor, which traces back almost 100 years; and the keyboard, which goes back a full 150 years. Given the rapid advancements in technology lately, it seems like these could be due for a change as well, right?

Apple thinks it has the answer with the recently announced Vision Pro “augmented reality” headset, which puts the monitor on your head, replaces the mouse with eye tracking and hand gestures, and, when needed, lets you summon a virtual keyboard anywhere. I admit I’m skeptical, but I’ve been skeptical of astronomically priced, yet supposedly groundbreaking, computing devices before (yes, I’m referring to the iPhone). The iPhone changed the Internet forever by ensuring it is always with us (conveniently, yet obnoxiously). Apple is now suggesting it can do the same for your keyboard, mouse, and monitor.

As if that’s not enough of a leap, you may have recently read that Neuralink has won FDA approval to begin testing an implant that reads thoughts. One can imagine such a device one day rendering the standard keyboard irrelevant altogether. I find myself starting to ponder what might happen when large language model AI has access to our thoughts, but I’ll check myself and bring this back to our (current) daily reality.

At Extract, we recognize the importance of UI/UX (User Interface/User Experience). Historically, this has involved considering the most appropriate platform to target for our applications (desktop, web, mobile). From there, we spend a lot of time thinking about how our users interact with our software and striving to make that interaction as intuitive and efficient as possible. We are far from tearing up our roadmap in the face of radical shifts in how we can interact with our software. Still, these possibilities stand to give us at Extract, as well as the larger software development industry, plenty to think about in the upcoming years. There are interesting times ahead, and I look forward to considering how we might someday use these new (augmented) realities to improve how our users interact with the software we create.


Written by: Steve Kurth, Software Development Manager