Visualising the (truly) autonomous world
Changing perceptions by showing us what infrastructure-free localisation enables autonomous vehicles to see
Mapping autonomous capabilities
There’s a whole lot of talk about autonomous vehicles – whether in the air, on the roads or in our factories. Generalising horribly, most of the discussion is pretty vague, and this vagueness probably has a lot to do with basic ignorance of the subject. There also seems to be a latent, built-in “driver’s seat” perspective, as we struggle with our potential disempowerment and with real fears about giving up control of our automobile kingdoms (while blithely disregarding the multiple mindless imperfections of the human drivers we would be replacing).
A UK software company has done a remarkable job of addressing many of these communication issues in its website and marketing. Oxbotica – a 2014 spin-out from the Oxford Robotics Institute research group at Oxford University – develops the software that actually makes vehicles autonomous, using pioneering, platform-agnostic technology. It provides the “robotic brain” that makes mobile autonomy happen, and the vehicles housing the system “learn” while operating – far more than just coupling together cameras, radar and other sensors to react to the surroundings.
Oxbotica talks about a spectrum of autonomous competencies, and in particular about the infrastructure-free localisation that makes these possible. This means being completely independent of GPS, satellites and other fundamentally undependable infrastructure. The software can – literally – be used in anything that moves. And the company has a “mission to bring autonomous software to the world’s biggest markets.” Big ambitions, no limits.
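For the curious, here is roughly what “localisation without infrastructure” can look like in practice: matching a live laser scan against a map the vehicle has already built, and recovering the pose from the geometry alone. This is a generic, minimal scan-matching (ICP-style) sketch in Python, not Oxbotica’s actual algorithm – every name and number in it is illustrative.

```python
# A minimal, hypothetical sketch of infrastructure-free localisation:
# aligning a live 2D lidar scan against a previously built map of points,
# with no GPS involved. Generic ICP-style scan matching, for illustration only.
import numpy as np

def icp_2d(scan, map_points, iterations=20):
    """Estimate the rotation R and translation t that align `scan` to `map_points`."""
    R, t = np.eye(2), np.zeros(2)
    src = scan.copy()
    for _ in range(iterations):
        # 1. For each scan point, find its nearest neighbour in the map.
        dists = np.linalg.norm(src[:, None, :] - map_points[None, :, :], axis=2)
        matched = map_points[np.argmin(dists, axis=1)]
        # 2. Solve for the rigid transform that best aligns the matched pairs (SVD / Kabsch).
        src_c, dst_c = src - src.mean(0), matched - matched.mean(0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:      # guard against reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = matched.mean(0) - R_step @ src.mean(0)
        # 3. Apply the incremental transform and accumulate the overall pose estimate.
        src = (R_step @ src.T).T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t    # the vehicle's pose relative to its own map (no satellites needed)
```

The point of the sketch is simply that the pose comes from comparing what the sensors see with what the vehicle already knows about the world, with no external signal required.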
Show, not tell
The Oxbotica visual universe – the way it actually presents its ideas to us mere mortals – sits outside the vehicle, but is seen from inside the “AI head”.
The company’s website and marketing show us what the software “sees”, and thus pave the way to a much more subtle appreciation of the capabilities, capacity and actions of what actually drives the whole concept of mobile autonomy: an oomphed-up situational awareness designed to answer the key questions of “where am I, what’s around me and what do I do?” There’s a much bigger agenda here than mere obstacle avoidance, as explained here by Oxbotica founder Paul Newman.
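To make those three questions concrete, here is a toy, purely illustrative Python sketch of the classic localisation / perception / planning split they map onto – none of these names or numbers come from Oxbotica.

```python
# "Where am I?" -> localisation, "What's around me?" -> perception,
# "What do I do?" -> planning. A hypothetical, toy illustration of the split.
from dataclasses import dataclass

@dataclass
class Pose:            # "Where am I?"
    x: float
    y: float
    heading: float

@dataclass
class Obstacle:        # "What's around me?"
    x: float
    y: float
    radius: float

def plan(pose: Pose, obstacles: list[Obstacle]) -> str:
    """'What do I do?' A toy policy: brake if anything is close and ahead."""
    for ob in obstacles:
        ahead = (ob.x - pose.x) > 0 and abs(ob.y - pose.y) < 2.0
        close = (ob.x - pose.x) < 10.0
        if ahead and close:
            return "brake"
    return "continue"

# One tick of the loop, with made-up sensor output:
pose = Pose(x=0.0, y=0.0, heading=0.0)
obstacles = [Obstacle(x=6.0, y=0.5, radius=0.4)]
print(plan(pose, obstacles))   # -> "brake"
```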
Making the intangible easier to grasp
Oxbotica is a great example of a company that has put a lot of thought (and/or a big dollop of creativity) into showing what its intangible technology really is and into explaining to a wider audience how it works, in visually digestible layman’s terms. This means much wider appreciation of the company’s capabilities, as well as a better understanding of the whole context in which the technology can be used – which all adds up to market potential and intellectual property value.
YouTube user greentheonly found a somewhat similar way of conveying the same kind of message by overlaying key data used by the Tesla Autopilot feature onto dashcam footage, although this works at a much simpler, positioning level.
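For anyone tempted to try the overlay trick themselves, a minimal sketch of the idea might look like the following. It assumes OpenCV is installed and that the telemetry has already been extracted and time-aligned with the video (the genuinely hard part of what greentheonly did); the file names and fields are placeholders.

```python
# A minimal sketch of drawing vehicle telemetry on top of dashcam frames.
# Assumes telemetry is a list of dicts, one per frame, already time-aligned.
import cv2

def overlay_telemetry(video_in, video_out, telemetry):
    cap = cv2.VideoCapture(video_in)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(video_out, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Look up the telemetry sample for this frame (assumed pre-aligned).
        sample = telemetry[min(frame_idx, len(telemetry) - 1)]
        text = f"speed: {sample['speed_kph']:.0f} km/h  steer: {sample['steer_deg']:+.1f} deg"
        cv2.putText(frame, text, (20, 40), cv2.FONT_HERSHEY_SIMPLEX,
                    0.8, (0, 255, 0), 2)
        out.write(frame)
        frame_idx += 1
    cap.release()
    out.release()

# Example call with placeholder data:
# overlay_telemetry("dashcam.mp4", "dashcam_annotated.mp4",
#                   [{"speed_kph": 52, "steer_deg": -1.3}] * 1000)
```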
The point is the importance of finding innovative ways of communicating complex technologies – going beyond the usual software/engineering tunnel vision and preoccupation with detail to find an effective visual and communication language for abstract, intangible digital capabilities. There seems to be a big growth opportunity here.