As artificial intelligence continues to evolve, its role in enterprise engineering environments – especially PLM – is becoming more compelling. At xLM Solutions, we’ve begun exploring how AI agents can transform the way users interact with Product Lifecycle Management systems by automating workflows, retrieving data, and making smart, contextual decisions on behalf of users.
In a recent video, I showcased how to connect commercial AI large language models (LLMs) and chat tools like OpenAI’s ChatGPT and Claude to PLM systems such as 3DEXPERIENCE and SOLIDWORKS PDM. But instead of relying on traditional, brittle API integrations, we used a newer approach: the Model Context Protocol (MCP) – a standardized protocol that simplifies how AI agents interface with external systems.
Keep reading for a high-level overview of the concepts behind AI agents and the MCP model, and what we demonstrated in the video. A deeper dive, including configuration details and code samples, will be offered in our upcoming webinar.
What Is an AI Agent?
An AI agent is more than a chatbot. It’s an autonomous system that can take actions on a user’s behalf, learn from interactions, adapt to different contexts, and intelligently integrate with external systems. In the PLM world, that means helping engineers and decision-makers:
- Retrieve part information or BOMs
- Identify discrepancies across systems
- Support risk and compliance analysis
- Improve supply chain responsiveness
Unlike traditional automation, which runs predefined scripts, AI agents operate more dynamically. When connected to machine learning models (we focused on LLMs in our demo), they can even improve over time.
The Problem with Traditional Integrations
Historically, integrating AI models with PLM systems required custom code for each connection – writing and maintaining direct API calls for platforms like SOLIDWORKS PDM, 3DEXPERIENCE, Aras, OpenBOM, Autodesk Vault, and Arena (these are just some of the systems we support). This approach has some serious drawbacks:
- High maintenance overhead when APIs change
- Security challenges across multiple interfaces
- Scalability issues as systems and use cases expand
In short, traditional integration methods are difficult to manage and not sustainable as AI capabilities grow.
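To make that maintenance burden concrete, here is a deliberately simplified sketch of the point-to-point approach – one bespoke function per system. The endpoint paths, headers, and payload shapes below are hypothetical placeholders rather than the real SOLIDWORKS PDM or 3DEXPERIENCE web services; the point is that every one of these functions has to be written, secured, and maintained separately, and rewritten whenever the underlying API changes.

```python
import requests

# Hypothetical, simplified direct integrations -- one bespoke function per system.
# Real SOLIDWORKS PDM / 3DEXPERIENCE APIs differ; the point is that each connection
# carries its own URL scheme, authentication, and payload format to maintain.

def get_part_from_pdm(part_number: str, base_url: str, api_key: str) -> dict:
    """Fetch part metadata from a (hypothetical) SOLIDWORKS PDM web endpoint."""
    resp = requests.get(
        f"{base_url}/api/parts/{part_number}",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def get_part_from_3dx(part_number: str, base_url: str, token: str) -> dict:
    """Fetch part metadata from a (hypothetical) 3DEXPERIENCE endpoint."""
    resp = requests.get(
        f"{base_url}/resources/v1/items",
        params={"number": part_number},
        headers={"SecurityToken": token},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# ...and another function like these for Aras, OpenBOM, Autodesk Vault, Arena, etc.
```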
Introducing the Model Context Protocol (MCP)
The Model Context Protocol (MCP) offers a more scalable, secure, and standardized alternative. Instead of custom-building every integration, MCP acts as a universal adapter – think of it like a USB-C port for software systems.
With MCP:
- The AI agent connects to a single server
- Requests and responses are handled through a shared protocol
- A discovery and reflection process exposes available MCP server tools to the model
- Contextual prompts allow the agent to intelligently choose which tool to invoke
This allows one integration point for all external systems – whether it’s PLM, ERP, MES, or vendor management – making the setup far more future-proof and manageable.
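To make this more tangible, here is a minimal sketch of what an MCP server exposing a PLM lookup tool can look like, using the MCP Python SDK (the “mcp” package on PyPI). The server name, tool name, and placeholder return data are hypothetical stand-ins – the actual tools and configuration from our demo will be covered in the webinar.

```python
# Minimal MCP server sketch using the MCP Python SDK ("pip install mcp").
# The tool below is a hypothetical stand-in for a real SOLIDWORKS PDM query.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("plm-tools")

@mcp.tool()
def get_pdm_part(part_number: str) -> dict:
    """Return basic metadata for a part stored in SOLIDWORKS PDM."""
    # A real server would call the PDM API here; placeholder data keeps the sketch self-contained.
    return {
        "part_number": part_number,
        "description": "Placeholder description",
        "revision": "A",
        "state": "Released",
    }

if __name__ == "__main__":
    # The stdio transport lets an MCP-aware client (such as Claude Desktop)
    # launch this server as a subprocess and discover its tools automatically.
    mcp.run(transport="stdio")
```

Because the client discovers the tool’s name, docstring, and parameter schema through MCP’s reflection step, no system-specific logic has to live in the prompt itself.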
Claude, ChatGPT, and a Custom Console
In the video, I showcased three different environments where MCP was used to connect AI tools to PLM systems:
- Claude AI Desktop Application
Claude was configured to reflect the available MCP tools, such as “platform3dx” and “get platform SOLIDWORKS PDM” (a sample configuration sketch follows below). When asked for part data, it successfully retrieved information from both systems in real time – without any direct coding or system-specific logic embedded in the prompt.
- Custom Console Application
I also built a simple console tool to show that you don’t need a commercial chatbot. Using Claude on the backend, the same MCP tools were invoked, proving that you can create lightweight, custom applications that leverage AI agent functionality (a minimal client sketch also follows below).
- ChatGPT Web Interface
Using a third-party browser extension and MCP tools registered with the model, I showed how ChatGPT could retrieve and even visualize data pulled from external PLM systems. It identified the correct MCP tools on its own and executed the retrieval as if it were a human assistant.
In each scenario, the AI agent was able to seamlessly interact with SOLIDWORKS PDM and 3DEXPERIENCE using a consistent and secure interface.
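For readers who want a rough sense of the plumbing: registering a local MCP server with the Claude Desktop application is typically done through its claude_desktop_config.json file. The sketch below assumes the hypothetical Python server from the earlier example is saved as plm_mcp_server.py; the exact file location and entries depend on your operating system and Claude Desktop version.

```json
{
  "mcpServers": {
    "plm-tools": {
      "command": "python",
      "args": ["plm_mcp_server.py"]
    }
  }
}
```

The custom console application follows the same pattern programmatically: open an MCP session to the server, list its tools, hand them to Claude, and execute whichever tool the model selects. The sketch below uses the MCP and Anthropic Python SDKs; the model name, the single round of tool handling, and the file names are simplifying assumptions, not the exact code from the demo.

```python
# Console sketch: hand MCP tools to Claude and run whichever tool it selects.
# Assumes the server sketch above is saved as plm_mcp_server.py and that
# ANTHROPIC_API_KEY is set in the environment.
import asyncio

from anthropic import Anthropic
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def ask(question: str) -> None:
    server = StdioServerParameters(command="python", args=["plm_mcp_server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            listed = await session.list_tools()
            # Convert the discovered MCP tools into Anthropic's tool-use format.
            tools = [
                {
                    "name": t.name,
                    "description": t.description or "",
                    "input_schema": t.inputSchema,
                }
                for t in listed.tools
            ]
            reply = Anthropic().messages.create(
                model="claude-3-5-sonnet-latest",  # assumed model name
                max_tokens=1024,
                tools=tools,
                messages=[{"role": "user", "content": question}],
            )
            # Single-round handling: execute any tool calls Claude requests
            # and print the text portions of its reply.
            for block in reply.content:
                if block.type == "tool_use":
                    result = await session.call_tool(block.name, block.input)
                    print(f"{block.name} -> {result.content}")
                elif block.type == "text":
                    print(block.text)


if __name__ == "__main__":
    asyncio.run(ask("Get the PDM data for part number 12345"))
```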
⸻
What’s Next: Behind the Scenes and Model Building
The goal of this session was to offer a sneak peek at what’s possible. In our upcoming webinar, we’ll go deeper into:
- The backend configuration of the MCP client and server
- Code examples to help you build your own AI agent
- Best practices for managing prompts, tool registration, and user permissions
- A high-level introduction to building or fine-tuning AI models, including:
  - Data collection and preprocessing
  - Training strategies
  - Deployment considerations
  - Hosting your model securely (vs. using public LLM chatbots like ChatGPT, Claude, etc.)
This is just the beginning of what’s possible with AI agents in PLM – and at xLM Solutions, we’re actively working with clients to explore and implement these technologies in real-world environments.
Looking Ahead
Stay tuned for our upcoming webinar, where we’ll walk through the technical architecture and implementation steps in more detail.
In the meantime, watch the video, subscribe to our newsletter, follow xLM Solutions on LinkedIn, or reach out to our team if you’re ready to explore AI-powered PLM solutions for your organization.