01design

01design creates digital experiences through a structured Design Thinking process. Since 2004, we have created hundreds of apps for mobile devices. We work with B2B/B2C corporate clients and start-ups, and we run a work-for-equity programme to support the latter in developing their product or service. Our challenge is to design the interfaces of tomorrow.

Multimodal User Interface

Conversational Interfaces represent a paradigm shift in the history of interaction: natural language makes it possible to interact with any machine immediately. In many cases, however, the speed of voice commands must be accompanied by visual elements, because visual scanning is often faster than the sequential access to information imposed by voice output.

At 01design we create Multimodal User Interfaces that combine the speed of conversational interaction with the efficiency of graphical interfaces. Our goal is to design hybrid interfaces that deliver services and information through the best possible interaction for the context (touch + type + voice). Our focus on User Experience has led us to develop Hybrid UIs that increase the efficiency of interaction through voice commands without losing the functional and emotional qualities that a great Graphical User Interface provides.
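
To make the idea concrete, here is a minimal sketch (the intent, data shapes and field names are purely illustrative, not our production code) of how a single user request can be answered both vocally and visually, letting speech carry the headline while the screen carries the scannable detail:

    from dataclasses import dataclass

    @dataclass
    class MultimodalResponse:
        """One intent, two complementary renderings: a spoken summary and a visual card."""
        speech: str   # short sentence handed to the TTS engine
        screen: dict  # structured payload rendered by the graphical UI

    def weather_intent(city: str, forecast: dict) -> MultimodalResponse:
        # Voice answers immediately; the screen lets the user scan the full week.
        return MultimodalResponse(
            speech=f"Tomorrow in {city}: {forecast['summary']}, around {forecast['max_c']} degrees.",
            screen={
                "type": "card",
                "title": f"Weather in {city}",
                "rows": [(day, f"{d['min_c']}-{d['max_c']} C")
                         for day, d in forecast["week"].items()],
            },
        )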

User Experience

Our focus is on the end user, so that anyone can easily reach, understand and use the services we offer across multiple channels. Over the past 20 years we have solved challenges of every kind, including low-level integrations, ad-hoc feature development, AI, IoT, voice interaction and user-generated content (UGC) integration.

We integrate various deep learning solutions to develop next-generation mobile applications that simplify user interaction. In the design of conversational interfaces, we use Automatic Speech Recognition (ASR) to transcribe text from an audio signal, Natural Language Processing (NLP) to derive meaning from the transcribed text (the ASR output) and, finally, Speech Synthesis or Text-To-Speech (TTS) for the artificial production of human speech from text. We use established Conversation Design (CxD) processes and rapid prototyping software to develop next-generation Pervasive Interfaces.
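
The sketch below makes that pipeline explicit. The engine classes are deliberate placeholders for whichever ASR, NLP and TTS providers a given project uses (cloud speech APIs, on-device models, and so on); they are an assumption for illustration, not a specific SDK:

    class ASREngine:
        def transcribe(self, audio: bytes) -> str:
            """Automatic Speech Recognition: audio signal -> transcript."""
            raise NotImplementedError

    class NLPEngine:
        def understand(self, text: str) -> dict:
            """Natural Language Processing: transcript -> intent and entities."""
            raise NotImplementedError

    class TTSEngine:
        def synthesize(self, text: str) -> bytes:
            """Text-To-Speech: response text -> spoken audio."""
            raise NotImplementedError

    def reply_for(intent: dict) -> str:
        """Application logic stub: map the recognised intent to a response text."""
        return intent.get("reply", "Sorry, I did not catch that.")

    def handle_utterance(audio: bytes, asr: ASREngine, nlp: NLPEngine, tts: TTSEngine) -> bytes:
        transcript = asr.transcribe(audio)        # ASR: audio -> text
        intent = nlp.understand(transcript)       # NLP: text -> meaning
        return tts.synthesize(reply_for(intent))  # TTS: text -> audio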

Technologies

Our apps run on a Kubernetes server architecture organised into nodes and microservices. This lets us scale from initial to future needs without over-provisioning, adapting capacity dynamically as demand changes. The microservices structure also keeps the infrastructure simple and avoids locking us into any particular server technology.
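
As a small, hedged illustration of that dynamic scaling (the deployment name, namespace and replica count below are hypothetical), a single microservice can be resized through the official Kubernetes Python client without touching the rest of the cluster:

    from kubernetes import client, config

    def scale_microservice(deployment: str, replicas: int, namespace: str = "default") -> None:
        """Adjust the replica count of one Deployment; other services are unaffected."""
        config.load_kube_config()  # use config.load_incluster_config() when running inside a pod
        apps = client.AppsV1Api()
        apps.patch_namespaced_deployment_scale(
            name=deployment,
            namespace=namespace,
            body={"spec": {"replicas": replicas}},  # only the replica count is patched
        )

    # Hypothetical usage: grow the voice gateway ahead of a traffic peak.
    scale_microservice("voice-gateway", replicas=5)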

The AI components are also based on microservices, with nodes dedicated to Dockerised Linux machines running Python. The infrastructure behind Kubernetes runs on Google Cloud Platform and Amazon AWS, with EC2 instances for the application logic and managed database services for the data layer (MongoDB). We master native and hybrid development on different platforms, with benefits in both cost and delivery time. For front-end web applications we use Angular, React and JavaScript.
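
As a minimal sketch of that data layer (the connection string, database and collection names are placeholders), writing and reading documents from one of the Python microservices with MongoDB could look like this:

    from pymongo import MongoClient

    # Placeholder connection string; in practice it comes from the cluster's
    # configuration or secrets, never from source code.
    client = MongoClient("mongodb://db.internal:27017")
    db = client["appdb"]

    # Store one interaction produced by the conversational pipeline.
    db.interactions.insert_one({
        "user_id": "u-123",
        "intent": "weather.forecast",
        "channel": "voice",
    })

    # Read back the most recent interactions for the same user.
    for doc in db.interactions.find({"user_id": "u-123"}).sort("_id", -1).limit(10):
        print(doc["intent"], doc["channel"])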