
MIT’s deep learning system, Air-Guardian, is a step toward enhancing flight safety by working alongside airplane pilots. The system is designed to act when a pilot overlooks a critical situation, preventing potential incidents before they occur. At Air-Guardian’s core is a deep learning architecture called liquid neural networks (LNNs), developed at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). LNNs have already proven effective in fields that require compute-efficient and explainable AI, presenting a promising alternative to popular deep learning models.

Air-Guardian augments flight safety by monitoring both the human pilot’s attention and the AI’s focus. When a discrepancy arises, meaning the pilot has missed a critical aspect of the flight, the AI intervenes and takes control of that particular flight element. This human-in-the-loop design keeps the pilot in control while allowing the AI to cover lapses. The overarching idea is to build systems that collaborate with humans, aiding them in challenging situations while letting them continue with the tasks they excel at.

In scenarios where an airplane flies close to the ground and gravitational forces become unpredictable, Air-Guardian is designed to take over to avert incidents. Likewise, when a pilot is inundated with information on the cockpit displays, the system filters the data and emphasizes crucial information the pilot may have missed. This kind of intervention is particularly vital when a pilot becomes overwhelmed or loses consciousness due to unforeseen circumstances. A distinctive feature of Air-Guardian is its use of eye-tracking technology to monitor the human pilot’s attention, while heatmaps showcase where the AI system’s focus lies. When a divergence between the pilot’s attention and the AI’s focus is detected, Air-Guardian evaluates whether the AI has pinpointed an issue requiring immediate attention. This mechanism ensures that critical issues are not overlooked and that timely interventions keep the flight safe.
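The article does not specify how Air-Guardian quantifies the mismatch between pilot gaze and AI focus. A minimal sketch, assuming both attention signals are rendered as 2D heatmaps over the same view, could compare them with Jensen-Shannon divergence and flag an intervention above a threshold (the function names and the 0.1 threshold are illustrative assumptions, not details from the system):

```python
import numpy as np

def _normalize(heatmap):
    """Flatten a heatmap into a probability distribution (with smoothing)."""
    h = np.asarray(heatmap, dtype=float).ravel() + 1e-9
    return h / h.sum()

def js_divergence(p, q):
    """Jensen-Shannon divergence between two attention distributions.

    Ranges from 0 (identical focus) to ln(2) (completely disjoint focus).
    """
    p, q = _normalize(p), _normalize(q)
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def should_intervene(pilot_gaze, ai_saliency, threshold=0.1):
    """Flag an intervention when pilot attention and AI focus diverge."""
    return js_divergence(pilot_gaze, ai_saliency) > threshold
```

Jensen-Shannon divergence is a natural choice here because it is symmetric and bounded, so a single fixed threshold behaves sensibly regardless of which heatmap is sharper.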

The backbone technology, the LNN, is prized for its explainability: engineers can examine the model’s decision-making process, in sharp contrast with traditional deep learning systems, which are often described as “black boxes.” LNNs’ ability to learn the causal relationships within their data makes them significantly more robust in real-world settings. Unlike traditional neural networks, which often learn superficial or spurious correlations, LNNs interact with their data to test counterfactual scenarios and grasp cause-and-effect relationships.
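The defining idea behind liquid neural networks is a neuron whose effective time constant is modulated by its input, so the dynamics adapt as the data changes. A minimal sketch of one Euler-integration step of a liquid time-constant cell, with the tanh nonlinearity, weight shapes, and step size all chosen for illustration rather than taken from Air-Guardian:

```python
import numpy as np

def ltc_step(x, u, W_rec, W_in, bias, tau, A, dt=0.01):
    """One Euler step of a liquid time-constant (LTC) cell.

    x: neuron state vector; u: input vector; tau: base time constants;
    A: bias vector the gated term pulls the state toward.
    The gate f depends on state and input, so the effective time
    constant (1/tau + f) changes with the data: the "liquid" behavior.
    """
    f = np.tanh(W_rec @ x + W_in @ u + bias)   # input-dependent gate
    dxdt = -(1.0 / tau + f) * x + f * A        # LTC ODE right-hand side
    return x + dt * dxdt
```

Because the gated leak term keeps the state bounded, a small network of such cells can stay stable over long rollouts, which is part of why LNNs can solve tasks with comparatively few neurons.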

Another notable advantage of LNNs is their compactness: they can perform complex tasks with far fewer computational units, or “neurons.” This compactness is crucial for edge applications such as self-driving cars, drones, robots, and aviation, where decisions must be made in real time and relying on cloud-based models is not feasible. The compact nature of LNNs is a step toward bringing powerful AI to edge devices like smartphones and personal computers, setting the stage for a new wave of AI innovation.

The insights derived from the development of Air-Guardian have broader applications, extending to any scenario where AI assistants must collaborate with humans. This could range from simple tasks across various applications to complex ones like automated surgery and autonomous driving, where continuous human-AI interaction is essential. The advent of LNNs, as deployed in Air-Guardian, is reminiscent of the period before the influential “transformer” paper was published, hinting at a new era of AI systems built on a fresh foundational model.