While speaking with developers from different areas of expertise who want to use AI in their projects, we notice a common perception that AI is a magical box: connect it to your project with a few lines of code, let it train for a while, and somehow the system will know what to do, what to say or not to say, and will search for data and integrate it by itself.
That is obviously not the case; building an AI product takes effort and time. An AI project is a pipeline of several building blocks. It starts with collecting different types of data and proceeds to analyzing and classifying that data with machine learning models. The classification is then translated into actions by a logic-based rules engine, sometimes augmented with a context engine, as described in the figure below.
Sensors / Databases – this is the first station where data enters the system. Data may already exist in databases or be received in real time from input devices such as microphones, keyboards, cameras, temperature sensors and the like. Some data is clear, and an AI system can understand it as-is, such as data flowing from a temperature sensor or a GPS module. In many other cases the data is abstract and needs to be processed and classified. This is where ML comes into action.
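The split described above can be sketched as a simple routing step. This is a minimal illustration, not any real framework; the field names and the routing function are assumptions:

```python
# Hypothetical sketch: numeric sensor readings are usable as-is, while
# abstract payloads (audio, images, free text) must first go through an
# ML classifier. All names here are illustrative only.

def route_input(reading):
    """Return 'direct' for self-describing numeric data, 'classify' otherwise."""
    if isinstance(reading.get("value"), (int, float)):
        return "direct"    # e.g. temperature or GPS: logic can use it directly
    return "classify"      # e.g. audio, image, free text: needs an ML model

print(route_input({"source": "thermometer", "value": 21.5}))           # direct
print(route_input({"source": "microphone", "value": "raw audio bytes"}))  # classify
```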
Machine Learning (ML) and Classification – this part of the pipeline deals with understanding and classifying the data. It can be an image, a sentence or any other type of data that needs to be analyzed. For example, when building a chatbot, we need to understand the intent and the entities in a sentence given by a user. For such a task we use an NLU (Natural Language Understanding) service, usually backed by ML models, which extracts these components:
“when is the next flight from Boston to NY?”
Intent = flight
Entities = time, Boston, NY
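The extracted components typically come back as structured data that downstream logic can branch on. The exact shape below is an assumption for illustration, not any specific vendor's API:

```python
# Illustrative shape of an NLU result for the flight question above;
# field names are invented, not a real service's response format.
nlu_result = {
    "text": "when is the next flight from Boston to NY?",
    "intent": "flight",
    "entities": [
        {"type": "time", "value": "next"},
        {"type": "city.from", "value": "Boston"},
        {"type": "city.to", "value": "NY"},
    ],
}

# Downstream logic branches on the intent and reads out the entities:
if nlu_result["intent"] == "flight":
    cities = [e["value"] for e in nlu_result["entities"] if e["type"].startswith("city")]
    print(cities)  # ['Boston', 'NY']
```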
There are several NLU-as-a-service engines that can be used and trained:
Dialogflow (Google, ex API.ai)
When adding an NLU model to the project, a training session is needed. Most services provide a pre-trained model that can be used out of the box and refined with more training, like Luis.ai. Training a model means adding sentences and defining vocabulary terms and their meanings in the model:
“How are you?” can be asked as:
How is it going?
How do you do?
How have you been?
All these variations need to be defined in the model under a “HowAreYou” intent.
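Conceptually, the training data maps an intent name to its example utterances. The toy matcher below only does exact lookup; a real NLU service learns to generalize from these examples to unseen phrasings:

```python
# Toy illustration of intent training data. The structure mirrors how
# utterances are grouped under an intent; the matcher is deliberately naive.
TRAINING = {
    "HowAreYou": [
        "how are you?",
        "how is it going?",
        "how do you do?",
        "how have you been?",
    ],
}

def match_intent(utterance):
    """Exact-match lookup; a real service would generalize beyond this."""
    text = utterance.lower().strip()
    for intent, examples in TRAINING.items():
        if text in examples:
            return intent
    return None

print(match_intent("How is it going?"))  # HowAreYou
```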
When dealing with images in ML models, the same principles are used to train the model on the correct classes.
Control mechanism – if the model can receive training from users, a control mechanism added on top of it helps prevent spamming of the ML model. Everyone remembers what happened with Microsoft's chatbot experiment “Tay”. Separate ML models can filter content before it enters the main ML model, and some services provide this as part of the NLU model.
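A control mechanism of this kind can be pictured as a gate in front of the main model. The blocklist below stands in for a real moderation model; the names and threshold logic are assumptions for illustration:

```python
# Hedged sketch of a control mechanism: a filter stage rejects abusive or
# spam-like input before it can reach (and train) the main model.
# BLOCKLIST is a placeholder for a real content-moderation model.
BLOCKLIST = {"spamword", "slur"}

def gate(utterance, train_model):
    """Forward clean input to the trainer; drop anything that hits the filter."""
    tokens = set(utterance.lower().split())
    if tokens & BLOCKLIST:
        return False            # dropped: never reaches the main ML model
    train_model(utterance)      # accepted: forwarded for training
    return True

accepted = []
gate("hello there", accepted.append)       # passes the filter
gate("buy spamword now", accepted.append)  # rejected
print(accepted)  # ['hello there']
```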
Context Engine – once classes are retrieved by the ML model, Servo.ai provides a unique context engine to extract an even deeper intention from the already classified information. A context is created when a set of classifications is grouped together in a unique way. It can be words in a sub-conversation framed in time, or a location grouped with weather, defining a specific driving condition for an autonomous car. Servo.ai enables switching between different contexts in real time as new data is processed by the system.
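The driving-condition example can be sketched as grouping classifications into a higher-level label. The rules and labels below are invented for illustration and do not describe Servo.ai's actual engine:

```python
# Sketch of the context idea: a set of individual classifications,
# grouped together, yields a deeper contextual label.
def driving_context(classifications):
    """Map a set of low-level classes to a higher-level driving condition."""
    c = set(classifications)
    if {"rain", "night"} <= c:      # weather + time of day combine
        return "low-visibility"
    if "highway" in c:
        return "cruise"
    return "default"

print(driving_context(["rain", "night", "highway"]))  # low-visibility
```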
Rules Engine – much like ML needs a training phase, the rules engine uses a development interface to define its rules and a runtime engine to execute them. As an example of a rules engine at runtime, take an autonomous drone. While it is flying, the rules engine governs its main operation: for take-off it might set rotor speed to 80%, while for a turn it reduces the speed of some rotors, and so on. But sometimes a set of predefined conditions for every action is not enough. For example, having the drone follow a specific car involves an ML model running in parallel, constantly identifying the car, labeling it on the image and calculating its location on the ground. Based on that, the rules engine sets the direction, altitude, speed and other actions the drone should take.
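A minimal sketch of that interplay: the 80% take-off figure comes from the text, while the function name, the gains and the offset input (standing in for the ML tracker's output) are assumptions for illustration:

```python
# Rule-based drone control informed by an ML tracker. target_offset is
# a placeholder for the car's horizontal offset reported by the ML model.
def rotor_speeds(phase, target_offset=0.0):
    if phase == "takeoff":
        return [0.8, 0.8, 0.8, 0.8]      # all rotors at 80%
    if phase == "turn":
        return [0.8, 0.8, 0.6, 0.6]      # slow one side to bank
    if phase == "follow":
        # Steer proportionally toward the tracked car, clamped for safety.
        turn = max(-0.2, min(0.2, 0.1 * target_offset))
        return [0.7 + turn, 0.7 + turn, 0.7 - turn, 0.7 - turn]
    return [0.5] * 4                     # hover default

print(rotor_speeds("takeoff"))  # [0.8, 0.8, 0.8, 0.8]
```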
In another use case, building a chatbot, a rules engine is used to structure the conversation. Take a chatbot for a bank's customer service, for instance. Sub-conversations need to be built around different topics: wire transfer, balance, transaction history and so on. When a user wants to make a transfer, it triggers the Wire Transfer sub-conversation, and if the user suddenly asks for her balance, the context engine switches to the Balance sub-dialog. Servo.ai provides a behavior-tree visual editor for building such complex automation, including the ability to use subtrees (a certain conversation, for example) as modules and reuse or share them across projects.
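The sub-dialog switching can be sketched with a plain dictionary standing in for the behavior tree; intent names and replies below are invented for illustration:

```python
# Sub-conversation routing for a hypothetical banking bot. Each entry is
# a sub-dialog handler; a behavior-tree editor would replace this dict.
SUBDIALOGS = {
    "wire_transfer": lambda: "How much would you like to transfer?",
    "balance": lambda: "Your balance is $1,240.",
    "history": lambda: "Here are your last five transactions.",
}

def route(intent, current=None):
    """Switch to the sub-dialog matching the intent, else stay in the current one."""
    return intent if intent in SUBDIALOGS else current

dialog = route("wire_transfer")
print(SUBDIALOGS[dialog]())               # How much would you like to transfer?
dialog = route("balance", current=dialog)  # user interrupts with a balance question
print(SUBDIALOGS[dialog]())               # Your balance is $1,240.
```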
Once the whole pipeline is trained and defined, the runtime environment runs the rules and sends output data to hardware I/O ports, to a text or speech engine for an artificial assistant, or back into a database.
While the term AI is commonly used for ML models, we can see there is more to building an AI product. ML models can save a lot of time and provide solutions that cannot be achieved by other methods, but in many cases ML is costly, time-consuming and overkill: automatically turning on the light when it's dark can be done with a simple sensor.
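The light example really is a one-line rule, no model required. The sensor reading and threshold value below are illustrative:

```python
# The "turn on the light when it's dark" case as a plain threshold rule.
LUX_THRESHOLD = 50  # illustrative cutoff for "dark", in lux

def light_on(ambient_lux):
    """Switch the light on whenever ambient brightness falls below the cutoff."""
    return ambient_lux < LUX_THRESHOLD

print(light_on(10))   # True: it's dark, switch the light on
print(light_on(300))  # False: bright enough, leave it off
```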
Even today, when ML models can be very accurate, there is still a need for a fail-safe mechanism that puts limits on where the ML model controls decision-making, much like Isaac Asimov's first law of robotics.
As a guideline, think of a machine learning model as you would think of a human: you expect it to behave like a human, but it also learns like a human. This means it will sometimes act inaccurately or produce biased results. A rules engine is needed to override the ML model at critical junctions; in some cases we want a car to stop no matter what. ML models are statistical models that carry a percentage of error, which cannot be accepted in some industries. Therefore, the optimal solution is a hybrid: ML models and a rules engine working together.
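The hybrid idea, with a hard rule overriding the model at a critical junction, can be sketched as follows. The action labels, confidence field and thresholds are assumptions for illustration:

```python
# Hybrid decision sketch: the ML model proposes an action, but a fail-safe
# rule can override it unconditionally (e.g. always stop for a pedestrian).
def decide(ml_action, ml_confidence, pedestrian_detected):
    if pedestrian_detected:      # fail-safe rule: overrides the model, no matter what
        return "stop"
    if ml_confidence < 0.6:      # low confidence: fall back to a safe default
        return "slow"
    return ml_action             # otherwise trust the model's proposal

print(decide("accelerate", 0.95, pedestrian_detected=True))   # stop
print(decide("accelerate", 0.95, pedestrian_detected=False))  # accelerate
```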