LiveNX 25.3 introduces a new Model Context Protocol (MCP) architecture. Under this design, customer data is no longer stored in the cloud. The language model does not retain or store user queries or data after a session, and only selected data is securely transmitted to the LLM. The MCP provides summarized context from prior exchanges to the model, so it can generate coherent and relevant responses. This strengthens data controls while maintaining the benefits of AI-assisted troubleshooting.
Network data continues to flow into LiveNX and is stored in the customer's local instance across multiple internal repositories. When a user submits a query through the natural language interface, MCP acts as the standardized mechanism for accessing this data and transmits only the data required to answer the query to the LLM hosted in AWS.
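The gating step described above can be sketched as follows. This is an illustrative simplification, not LiveNX code: the function name, the keyword-matching heuristic, and the sample repositories are all hypothetical, but it shows the key property that only records relevant to the query leave the local instance.

```python
# Hypothetical sketch: an MCP-style selection layer that decides which
# records from the customer's local repositories may be transmitted to
# the cloud-hosted LLM. All names and the matching logic are illustrative.

def select_relevant_data(query: str, local_stores: dict) -> dict:
    """Return only the records needed to answer the query; everything
    else stays in the customer's local LiveNX instance."""
    keywords = query.lower().split()
    selected = {}
    for store_name, records in local_stores.items():
        matches = [r for r in records
                   if any(k in str(r).lower() for k in keywords)]
        if matches:
            selected[store_name] = matches
    return selected

# Example: two local repositories; only the flow record mentioning
# "latency" is included in the payload sent to the LLM.
stores = {
    "flows": [{"src": "10.0.0.1", "issue": "latency spike"}],
    "security_events": [{"event": "port scan detected"}],
}
payload = select_relevant_data("why is latency high?", stores)
```

In a real deployment the selection would be driven by structured MCP tool calls rather than keyword matching, but the data-control boundary is the same: the full repositories never leave the local instance.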
There are multiple MCP servers within this architecture. These servers reside within LiveNX and connect to various APIs, including a dedicated MCP server for ClickHouse. In addition, an MCP server exists in the LiveAssist cloud environment in AWS; when the LLM makes requests, this server provides access to the datasets through the client. This distributed MCP design allows both local and cloud components to collaborate securely while maintaining data control boundaries.
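Conceptually, each MCP server exposes a set of named tools that the model can invoke. The toy registry below illustrates that pattern; it is not the Model Context Protocol SDK, and the server name, tool, and returned rows are all hypothetical stand-ins for the ClickHouse-backed server mentioned above.

```python
# Minimal tool-registry sketch of the MCP server pattern (illustrative
# only; production servers speak the actual Model Context Protocol, and
# every name below is hypothetical).

class ToolServer:
    def __init__(self, name: str):
        self.name = name
        self._tools = {}

    def tool(self, func):
        """Register a callable that the LLM can invoke by name."""
        self._tools[func.__name__] = func
        return func

    def call(self, tool_name: str, **kwargs):
        """Dispatch an incoming request to the named tool."""
        return self._tools[tool_name](**kwargs)

# A server fronting the ClickHouse repository, as in the text.
clickhouse_server = ToolServer("clickhouse")

@clickhouse_server.tool
def top_talkers(limit: int = 5):
    # A real implementation would run a ClickHouse query here;
    # canned rows keep the sketch self-contained.
    rows = [("10.0.0.1", 900), ("10.0.0.2", 450), ("10.0.0.3", 100)]
    return rows[:limit]

result = clickhouse_server.call("top_talkers", limit=2)
```

The same registry shape applies whether the server runs inside LiveNX or in the LiveAssist cloud environment; only the data it fronts differs.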
The LLM uses Anthropic Claude as the foundation model and is hosted in AWS Bedrock. AWS Bedrock does not share customer data with Anthropic and does not use this information to train the foundation models.
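A request to a Claude model on Bedrock can be assembled with the Converse API message shape, as sketched below. The model ID, system prompt, and context string are illustrative assumptions, not values from the LiveNX product; the sketch only builds the payload and does not call AWS.

```python
# Sketch of a Bedrock Converse request for a Claude model (model ID and
# prompt text are illustrative). Bedrock processes the request within
# AWS; it is not shared with Anthropic or used for model training.

def build_converse_request(user_query: str, mcp_context: str) -> dict:
    return {
        "modelId": "anthropic.claude-3-5-sonnet-20240620-v1:0",  # example ID
        "system": [{"text": "You are a network troubleshooting assistant. "
                            "Use only the supplied LiveNX context."}],
        "messages": [{
            "role": "user",
            "content": [{"text": f"{user_query}\n\nContext:\n{mcp_context}"}],
        }],
    }

request = build_converse_request(
    "Which interface is dropping packets?",
    "GigabitEthernet0/1: 4% output drops over last 15 min",
)
# The payload would then be sent with boto3, e.g.:
#   boto3.client("bedrock-runtime").converse(**request)
```

Keeping the MCP-selected context in the user message, rather than the system prompt, makes it easy to vary per query while the assistant instructions stay fixed.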
The LLM is not limited to a single tool; when prompted, it can draw from multiple data sources simultaneously within LiveNX, creating richer context and deeper insights. With the ability to review network data structures (flows, security events, telemetry, synthetic test data, etc.) when prompted, the model can interpret, correlate, and prioritize issues for faster troubleshooting by the user.
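The multi-source correlation described above can be illustrated with a small sketch. The device names, fields, and grouping logic are hypothetical; the point is that records from several repositories are merged per device before the model reasons over them.

```python
# Hypothetical sketch of multi-source correlation: group flow, security,
# and synthetic-test records by device so the model sees them together.

from collections import defaultdict

def correlate_by_device(**sources):
    """Merge records from any number of named sources, keyed by device."""
    merged = defaultdict(dict)
    for source_name, records in sources.items():
        for rec in records:
            merged[rec["device"]][source_name] = rec
    return dict(merged)

# One device surfacing in three different repositories at once.
context = correlate_by_device(
    flows=[{"device": "edge-rtr-1", "top_app": "video", "mbps": 850}],
    security=[{"device": "edge-rtr-1", "event": "ACL deny spike"}],
    synthetic=[{"device": "edge-rtr-1", "http_latency_ms": 420}],
)
```

Handing the model one correlated view per device, rather than three disjoint result sets, is what enables it to connect a traffic spike, a security event, and a failing synthetic test into a single prioritized finding.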