秘密直播

Devin Ramsden, an AI developer at APL, demonstrates a large language model (LLM) grounded by a direct acyclic graph (DAG) to assist warfighters in administering critical care on the battlefield.

Mission-Focused Generative AI

Inventing the future of artificial intelligence for the nation by advancing frontier models that enable creativity, subject-matter expertise, and personification

Our Contribution

The advent of large, pretrained generative models has led to rapid advancement in artificial intelligence (AI). Scientists and engineers in APL鈥檚 Intelligent Systems Center (ISC) are at the forefront of these advances, from designing conversational AI systems to elevating possibilities for human鈥搑obot teaming.

Our research focuses on domain-specific adaptations of generative models, composable AI systems, generative approaches to machine perception, red-teaming of frontier models, and digital twins of human behavior.

Research

Intelligent Systems Enabled by Generative Models

Diagram of a ConceptAgent AI system showing task planning. Elements include task goal, skill library, precondition checks, and LLM-based decision trees, with a robot interacting in a simulated environment.
Overview of ConceptAgent closed loop task planning and execution. State is composed of text description of the objective, task relevant observations, and task history. ()

Generative AI represents a paradigm shift in which intelligent systems can now use language to perceive, decide, and act鈥攎uch like humans do. ISC researchers are integrating generative AI into intelligent systems to enable open-vocabulary scene understanding, reason about complex activity occurring in long-form videos, and produce photorealistic virtual environments for T&E.


 

Training and Fine-Tuning Generative Models

APL鈥檚 sponsors have difficult missions with unique problems and data that are often not well represented in the training data of frontier models. ISC researchers are investigating techniques to adapt large language models (LLMs) and vision language models (VLMs) to domains and tasks that are critical to our sponsors. This includes:

  • Training an LLM on a large body, or 鈥渃orpus,鈥 of government-specific documents to serve as a backbone for sponsor-specific applications
  • Using AI to generate synthetic data for testing and evaluation (T&E) purposes
  • Extending emerging LLM architectures such as retrieval-augmented generation (RAG)

Much of this work is done in collaboration with the .


 

Digital Twins of Human Behavior

Contemporary generative AI systems can produce passable human behaviors and reactions to stimuli learned from large bodies of data, or 鈥渃orpora.鈥 It is an increasingly important challenge to judge and guarantee the accuracy of generative agents to the human counterparts they are tasked with emulating. To this end, ISC researchers are applying techniques from neuroscience, psychology, and game theory to measure and improve the semantic and behavioral similarities of generative agents to humans, both individually and as a whole.


 

Red-Teaming Frontier Models

Like with any new technology, generative AI comes with risks that must be anticipated and managed. For instance, LLMs risk 鈥渉allucinations鈥濃攆alsehoods that are presented as facts鈥攁nd inadvertently disclosing sensitive information present in the training data.

To address concerns such as these, ISC researchers serve on 鈥渞ed teams鈥 that help our sponsors understand the limitations of their systems, discover vulnerabilities, and develop novel countermeasures.


 

Meet Our Experts

For media inquiries, please contact the APL Public Affairs office.