Where AI is Heading

Predicting and exploring three emerging trends in Artificial Intelligence!


[Image: an AI-generated illustration of the three axes of this essay][1]

Introduction

Artificial Intelligence (AI) is not just a technological advancement; it's a revolution reshaping many facets of human life. From mundane tasks to complex problem-solving, AI is poised to transform our world in ways we can only begin to imagine. As we stand at the cusp of this transformation, it is crucial to explore where AI is heading. This essay will delve into the future trajectory of AI, examining the progression from current advancements to potential future applications and considering the ethical and societal implications of these developments.

1. Intent-Driven Computing

  • Applications Disappear: With the advent of sophisticated AI, traditional applications will become obsolete. Voice[i] and vision user interfaces (UIs) will dominate[ii], allowing AI to decipher user intent and perform the appropriate actions seamlessly. This is a paradigm shift from the user knowing what they want to do (and launching an application) to the AI learning what they are trying to accomplish and then either delivering or constructing, on the fly, an application that meets their implicit request. This shift has a strong precedent in the programming community’s move from imperative build tools (like “make”) toward declarative build tools (like “maven”). So, given that we now have something approaching AGI, it is not unreasonable to expect it to figure out what we want to do from a description of the end state (our goal or objective). The result of this shift toward intent-driven computing will be a less frustrating user experience, tailored applications, and the elimination of narrow, specific “task-based” apps[iii]. One early example is how ChatGPT will construct an application on the fly to perform data analysis when given a spreadsheet of data (a minimal sketch of this intent-driven pattern appears at the end of this section).

[Image: discussing and preparing the evening meal with the assistance of AI][2]

  • AI Understanding and Learning: Unlike current applications that follow fixed, narrowly defined paths, future AI systems will continuously listen, understand, and learn from users’ needs. This evolution will automate a wide range of tasks, making human-computer interaction more intuitive and efficient. The move towards zero-touch operations in networks exemplifies this shift, where AI not only manages but also optimizes services based on user intent[iv].
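
To make this shift concrete, here is a minimal, purely illustrative Python sketch of intent-driven dispatch: the user states a goal, a stand-in “intent interpreter” (which in practice would be an LLM) classifies it, and the system assembles the response on the fly rather than asking the user to launch a task-specific app. The capability registry and function names are hypothetical.

    # Hypothetical sketch: intent-driven dispatch instead of launching a fixed app.
    # The capability registry, goal phrasing, and keyword matching are illustrative
    # stand-ins for what a real LLM-based planner would infer from a user request.
    from typing import Callable

    # A small "capability" registry; each entry maps a kind of intent to an action.
    CAPABILITIES: dict[str, Callable[[str], str]] = {
        "summarize": lambda goal: f"Built a one-page summary for: {goal}",
        "analyze": lambda goal: f"Assembled an ad-hoc data-analysis view for: {goal}",
        "schedule": lambda goal: f"Drafted a calendar plan for: {goal}",
    }

    def interpret_intent(goal: str) -> str:
        """Naive stand-in for an AI model that classifies what the user wants."""
        for keyword in CAPABILITIES:
            if keyword in goal.lower():
                return keyword
        return "analyze"  # default when the intent is ambiguous

    def fulfill(goal: str) -> str:
        """Instead of the user picking an app, the system constructs the response."""
        return CAPABILITIES[interpret_intent(goal)](goal)

    # The user states a goal; no task-specific application is ever launched.
    print(fulfill("Analyze last quarter's sales spreadsheet and flag outliers"))

In a real system, the keyword matcher would be replaced by a model call and the lambda stubs by generated or composed application components.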

2. The Rise of Semantic AI

  • Proliferation of Sensors: The increasing number of sensors[v] will provide vast amounts of data about our environment, enhancing our ability to monitor and understand the world. Semantic AI will utilize this data to make informed decisions, improving system responsiveness and accuracy.
  • AI-in-the-Loop: Governments will only permit AI to take action autonomously when it is sufficiently trusted (thus currently requiring a “Human-in-the-loop”[vi]). Until full trust is achieved, a hybrid approach will dominate, in which AI collaborates with human oversight to ensure safety and reliability. This combination ensures a balance between automation and human judgment, which is crucial in sectors like healthcare and defense. The key to achieving AI-in-the-Loop is the combination of Semantic and Probabilistic techniques.

 

  • Semantic and Probabilistic Techniques: Combining semantic AI (which understands and processes meaning) with probabilistic techniques (which deal with uncertainty and probabilities) will improve AI's trustworthiness and explainability[vii]. This hybrid approach will become increasingly effective and thus enhance the accuracy and reliability of AI systems, fostering greater public trust, which will lead to AI-in-the-loop. For instance, cognitive networks leverage these techniques to manage and optimize network operations autonomously while maintaining high levels of trust and transparency. Another common example of this hybrid approach that is proving valuable is the integration of a “knowledge graph”[viii] with large language models, so that rich context can be fed to the LLM as background instructions (sketched just below this list). You can also use LLMs to assist in producing these explainable semantic models, removing a barrier that significantly slowed the proliferation of expert systems in the past: creating ontologies and rules is time-consuming, difficult, and requires deep domain knowledge. Modern AI can assist in generating these models by sifting through unstructured data and extracting the entities, the relationships that connect them, and the logical rules that govern them. In that scenario, humans take on the validation and refinement role while the AI does the hard data-mining work to create a rough first draft. To see why this is critical to trust, think in terms of set theory: the set of all civilians should forever remain distinct from the set of enemy combatants that an AI would fire upon.
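
As a rough illustration of the knowledge-graph grounding mentioned above, here is a minimal Python sketch. The tiny triple store, the entity names, and the helper functions (facts_about, build_context) are all invented for illustration; a real system would query an actual ontology or RDF store and pass the assembled context to an LLM.

    # Hypothetical sketch: feeding knowledge-graph facts to an LLM as background context.
    # The graph contents and helper functions are illustrative stand-ins only.

    # A miniature knowledge graph stored as (subject, predicate, object) triples.
    TRIPLES = [
        ("Pump-7", "isA", "CentrifugalPump"),
        ("Pump-7", "locatedIn", "Plant-East"),
        ("Pump-7", "maintainedBy", "Crew-B"),
        ("CentrifugalPump", "subClassOf", "RotatingEquipment"),
    ]

    def facts_about(entity: str) -> list[str]:
        """Collect every stored fact whose subject is the given entity."""
        return [f"{s} {p} {o}." for (s, p, o) in TRIPLES if s == entity]

    def build_context(entity: str, question: str) -> str:
        """Assemble a prompt in which graph facts ground the model's answer."""
        background = "\n".join(facts_about(entity))
        return (
            "Use only the background facts below when answering.\n"
            f"Background:\n{background}\n"
            f"Question: {question}"
        )

    # The resulting string would be sent to an LLM as its grounded context.
    print(build_context("Pump-7", "Which crew should I contact about maintenance?"))

In the reverse direction, the same triples could be drafted by an LLM from unstructured text and then validated and refined by a human, per the workflow described above.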

Even more fundamental than knowledge graphs and set theory is the representation of mathematical and logical concepts like dates, numbers, and formulas. These have very well-defined rules that are easily represented within a computer system, and the knowledge associated with them is both basic and critical to the very concept of reasoning. Consider, for example, how the language expressions “before” and “after” apply to dates.
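
As a trivial but concrete example of those well-defined rules, the snippet below shows how the language expression “before” maps onto the built-in ordering of dates (the specific dates chosen are arbitrary):

    # Minimal sketch: "before" and "after" over dates reduce to well-defined comparisons.
    from datetime import date

    def is_before(a: date, b: date) -> bool:
        """The language expression 'a before b' maps to a strict ordering on dates."""
        return a < b

    armistice = date(1918, 11, 11)
    moon_landing = date(1969, 7, 20)
    print(is_before(armistice, moon_landing))  # True: 1918 is before 1969
    print(is_before(moon_landing, armistice))  # False: it is "after", not "before"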

While the above is just a sampling of these hybrid techniques, Semantic AI will involve a constant interplay between pattern-matching and rules, rules and pattern-matching, on and on in a virtuous cycle. In my opinion, combining the two is the only way we get both the requisite trust (from confidence thresholds) and explainability (from entities with measurable characteristics and rules). This is not an either/or question; the safe path lies in combining both, as the sketch below illustrates.
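
Here is a hedged sketch of that interplay, assuming invented class names and an arbitrary confidence threshold: a probabilistic proposal is acted on only when it clears the threshold and survives the symbolic disjointness rules (echoing the civilian/combatant example above); otherwise it is deferred to a human or vetoed with an explainable reason.

    # Hypothetical sketch of the pattern-matching/rules interplay: a probabilistic
    # classifier proposes a label with a confidence score, and a symbolic layer of
    # disjointness rules and thresholds decides whether to act, defer to a human,
    # or veto outright. Class names, the threshold, and the scenarios are invented.
    CONFIDENCE_THRESHOLD = 0.95

    # Semantic constraint: these two classes are declared disjoint, so no entity
    # may ever be treated as a member of both.
    DISJOINT = {("Civilian", "Combatant")}

    def violates_disjointness(asserted: set[str]) -> bool:
        """True if the asserted classes break any declared disjointness rule."""
        return any(a in asserted and b in asserted for (a, b) in DISJOINT)

    def decide(known_classes: set[str], proposed_label: str, confidence: float) -> str:
        """Combine a probabilistic proposal with hard symbolic rules."""
        if violates_disjointness(known_classes | {proposed_label}):
            return "VETO: violates a disjointness rule; log and explain"
        if confidence < CONFIDENCE_THRESHOLD:
            return "DEFER: below threshold; route to the human in the loop"
        return "ACT: rule-consistent and above threshold (AI-in-the-loop)"

    print(decide({"Civilian"}, "Combatant", 0.99))  # vetoed by the rule layer
    print(decide(set(), "Combatant", 0.80))         # deferred to a human
    print(decide(set(), "Combatant", 0.99))         # allowed to act autonomously

The rule layer is what makes the outcome explainable (a named constraint was or was not violated), while the threshold is what quantifies trust.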

3. Robotic Guardians and the Golden Age of Robotics

  • Mainstream Robotics: Robotics will transition from niche applications to mainstream adoption, marking the golden age of robotics. Robotic entities will become commonplace in various societal roles, enhancing efficiency and productivity. Examples include robotic police[ix], soldiers, servants, and workers, transforming industries and public services[x].

  • Good vs. Evil Applications: As robots become more prevalent, both beneficial and malicious applications will emerge, leading to a cat-and-mouse game between ethical use and abuse of the technology. This dynamic necessitates robust monitoring and predictive systems to preempt and address AI-related crimes.

The doomsday predictions around AI are just dystopian extrapolation or alarmism. Instead of doomsday, we are seeing more of a tension, a tug of war, between beneficial and malicious uses of the technology (as with any other tool). Thus, our AI guardians will battle AI malware in a continuous cat-and-mouse fight, mimicking the constant one-upmanship in all the other security realms (physical, virtual, and even space). In cybersecurity, white-hat and black-hat hackers compete against each other, and offensive and defensive cyber capabilities constantly battle for supremacy.

 

This becomes even more important as robotics goes mainstream, because we will be entering an arena where humans can’t compete. For example, imagine trying to defend against a drone swarm. Humans cannot, and will never be able to, process the targeting sequence for 10,000 drones coming at them in multiple attack patterns simultaneously. You will have to trust your AI defenses enough to let them go “weapons free”.
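
To put a rough number on “humans can’t compete,” here is a toy Python sketch (the Track model, ranges, and speeds are all made up) that prioritizes a simulated 10,000-drone swarm by time-to-impact; a commodity laptop does this in a few milliseconds, a tempo no human operator can match.

    # Illustrative sketch of the scale problem: ranking thousands of inbound tracks
    # by time-to-impact is trivial for a machine and impossible for a human at this
    # tempo. The Track model and the numbers below are made up for illustration.
    import random
    import time
    from dataclasses import dataclass

    @dataclass
    class Track:
        track_id: int
        distance_m: float         # current range to the defended asset
        closing_speed_mps: float  # how fast the track is approaching

        @property
        def time_to_impact_s(self) -> float:
            return self.distance_m / self.closing_speed_mps

    # Simulate a 10,000-drone swarm with random ranges and closing speeds.
    swarm = [Track(i, random.uniform(500, 10_000), random.uniform(20, 80))
             for i in range(10_000)]

    start = time.perf_counter()
    engagement_queue = sorted(swarm, key=lambda t: t.time_to_impact_s)
    elapsed_ms = (time.perf_counter() - start) * 1000

    print(f"Prioritized {len(engagement_queue)} tracks in {elapsed_ms:.1f} ms")
    print("Most urgent track:", engagement_queue[0])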

 

  • Transparency and Accountability: To combat malicious AI, transparency and accountability in AI development and deployment will become essential[xi]. Monitoring and predicting AI-related crimes will be crucial to maintaining public safety. The rise of malicious AI applications will prompt stricter regulations and severe penalties for technologies aiding malicious activities, reflecting the serious threat they pose to society.
  • Harsh Penalties for Malicious AI: Recognizing the exponential threat posed by malicious AI, governments will enact severe penalties for the creation and dissemination of technologies that aid or abet malicious AI activities. This legal framework will aim to deter potential wrongdoers and protect society from the risks associated with malevolent AI.[xii] 

Implications and Ethical Considerations

  • AI Safety: Ensuring the safety of AI systems is paramount as they take on more responsibilities in society.
  • Trust and Explainability: Maintaining public trust through transparent and understandable AI decision-making processes is crucial.
  • Regulation and Governance: Establishing robust regulatory frameworks to govern AI deployment, ensuring it operates within ethical boundaries and benefits society as a whole.
  • Impact on Jobs and Automation: As AI and robots become more prevalent, their impact on employment and the nature of work will require careful management and adaptation.

4. Timeline

Speculating on timeframes is always tricky, but it is important to attempt to bound any discussion of the future in terms of how close we are to achieving the projected vision. Below I present a timeline of the three axes presented in this paper: Intent-driven computing, Semantic AI, and Robotic guardians. These are roughly seen as overlapping trend lines occurring in sequence and following a common adoption pattern.

Conclusion

AI is on a path of rapid advancement, moving from current applications to more autonomous and integrated systems. As we look to the future, the potential for AI to transform society is immense. However, this potential must be managed carefully, with a focus on ethical considerations and regulatory frameworks to ensure that AI serves humanity's best interests. The journey of AI is one of continual growth and adaptation, promising a future where AI is an indispensable ally in addressing the challenges and opportunities of our time. By embracing these advancements while remaining vigilant about their implications, we can navigate the path towards a future enriched and safeguarded by intelligent technologies.

 

About the Author:

Michael C. Daconta is currently the Vice President of AI Solutions at Parsons Corporation.  He is a well-known author, technologist, and leader, having authored or co-authored 14 books (13 technical and 1 philosophy), numerous magazine articles, and online columns.  Previously, Mr. Daconta was the Metadata Program Manager for the Department of Homeland Security, where he spearheaded data standardization, stewardship, and metadata registration.  He was selected by the Office of Management and Budget and the Federal CIO Council to lead the Federal Enterprise Architecture (FEA) Data Reference Model (DRM) Working Group, which successfully delivered DRM V2.0 in December 2005.  In conjunction with the Department of Justice, he launched the National Information Exchange Model (NIEM) to provide a reusable set of core XML components for building exchange packages.  For his work at DHS, Mr. Daconta was selected to the prestigious “Fed 100” by Federal Computer Week magazine.  Other past assignments include serving as Chief Architect of the Defense Intelligence Agency’s Virtual Knowledge Base Project and designing the electronic mortgage XML standard for Fannie Mae.  Mr. Daconta was awarded patent #7299408 by the USPTO for the electronic document validator he invented for Fannie Mae.  Recent books include “The Great Cloud Migration: Your roadmap to Cloud Computing, Big Data and Linked Data” and “Lazy Programmers: The Good, the Bad and the Ugly”.  His other books cover Information Management, the Semantic Web, XML, XUL, Java, C++ and C.

Mr. Daconta’s experience spans over thirty-six years in Software Development, System Architecture and Enterprise Data Management.  He has personally developed large-scale computing systems, led numerous development teams, and brought forth innovations in the areas of simulations, telemedicine, and Intelligence processing.  Mr. Daconta began his career as a Military Intelligence Officer and has since worked on many projects for the Intelligence community.  He earned his master’s degree in Computer Science from Nova Southeastern University (NSU) and his bachelor’s degree in Computer Science from New York University (NYU). 

 



[1]This image was generated by ChatGPT-4o after chatting about the three main topics of this essay.  The image was generated via a single prompt.  I purposefully left the misspellings in there to represent the fact that this is a forward-looking essay and we are not there yet.  Here is the prompt that generated the image: “Generate an image of this proposed future state that captures these three axes.”

[2]Here we have a common scenario that lends itself to visualization and a voice interface – discussing and preparing the evening meal with the assistance of AI.



[i] https://research.google/blog/ai-in-software-engineering-at-google-progress-and-the-path-ahead/

[vii] https://www.sciencedirect.com/science/article/pii/S1364661320300510