MCP Server
Copilot
Introduction
Copilot is an AI coding assistant that can connect various Large Language Models (LLMs) to the QuantConnect MCP Server. This page explains how to set up and use the server with Copilot Chat in the Local Platform. To set up Copilot in other environments, see Using the GitHub MCP Server in the GitHub documentation.
Getting Started
To connect Copilot in Local Platform to the QC MCP Server, follow these steps:
- Install and open Docker Desktop.
- In a terminal, pull the QC MCP Server from Docker Hub.
- Install and open Local Platform.
- Install the GitHub Copilot Chat extension.
- In Local Platform, press Ctrl+Shift+P to open the Command Palette, enter MCP: Open User Configuration, and then press Enter.
- Edit the mcp.json file that opens to include the following QuantConnect configuration:
- Press Ctrl+S to save the mcp.json file.
- In the top navigation bar, click the Chat icon.
- At the bottom of the Chat panel that opens, use the mode selector to switch to Agent mode.
$ docker pull quantconnect/mcp-server
If you have an ARM chip, add the --platform linux/arm64 option.
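For example, on an ARM machine the pull command becomes:

```
$ docker pull --platform linux/arm64 quantconnect/mcp-server
```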
{
    "servers": {
        "quantconnect": {
            "command": "docker",
            "args": [
                "run", "-i", "--rm",
                "-e", "QUANTCONNECT_USER_ID",
                "-e", "QUANTCONNECT_API_TOKEN",
                "-e", "AGENT_NAME",
                "--platform", "<your_platform>",
                "quantconnect/mcp-server"
            ],
            "env": {
                "QUANTCONNECT_USER_ID": "<your_user_id>",
                "QUANTCONNECT_API_TOKEN": "<your_api_token>",
                "AGENT_NAME": "MCP Server"
            }
        }
    }
}
To get your user ID and API token, see Request API Token.
Our MCP server is multi-platform capable. The options are linux/amd64 for Intel/AMD chips and linux/arm64 for ARM chips (for example, Apple's M-series chips).
If you simultaneously run multiple agents, set a unique value for the AGENT_NAME environment variable of each agent so you can identify the source of each request.
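For example, a configuration that runs two agents side by side might define two server entries, each with its own AGENT_NAME (the server keys and agent names below are illustrative; choose your own):

```json
{
    "servers": {
        "quantconnect-research": {
            "command": "docker",
            "args": [
                "run", "-i", "--rm",
                "-e", "QUANTCONNECT_USER_ID",
                "-e", "QUANTCONNECT_API_TOKEN",
                "-e", "AGENT_NAME",
                "--platform", "<your_platform>",
                "quantconnect/mcp-server"
            ],
            "env": {
                "QUANTCONNECT_USER_ID": "<your_user_id>",
                "QUANTCONNECT_API_TOKEN": "<your_api_token>",
                "AGENT_NAME": "Research Agent"
            }
        },
        "quantconnect-trading": {
            "command": "docker",
            "args": [
                "run", "-i", "--rm",
                "-e", "QUANTCONNECT_USER_ID",
                "-e", "QUANTCONNECT_API_TOKEN",
                "-e", "AGENT_NAME",
                "--platform", "<your_platform>",
                "quantconnect/mcp-server"
            ],
            "env": {
                "QUANTCONNECT_USER_ID": "<your_user_id>",
                "QUANTCONNECT_API_TOKEN": "<your_api_token>",
                "AGENT_NAME": "Trading Agent"
            }
        }
    }
}
```

With this layout, requests from each agent arrive tagged with a distinct AGENT_NAME, so the request source is unambiguous.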

To keep the Docker image up-to-date, in a terminal, pull the latest MCP server from Docker Hub.
$ docker pull quantconnect/mcp-server
If you have an ARM chip, add the --platform linux/arm64 option.
Models
Copilot supports several LLMs that you can use in the Chat panel, including GPT, Claude, and Gemini. To change the model, click the model name at the bottom of the Chat panel and then click the name of the model to use.

To view all the available models for each Copilot plan, see Models in the GitHub documentation.
Quotas
There are no quotas on the QuantConnect API, but Copilot and the LLMs impose their own. To view them, see Plans for GitHub Copilot in the GitHub documentation and the quotas of the model you use.
Troubleshooting
The following sections explain some issues you may encounter and how to resolve them.
Connection Error Code -32000
The docker run ... command in the configuration file also accepts a --name option, which sets the name of the Docker container when the MCP Server starts running. If your computer tries to start two MCP Server containers with the same name, this error occurs. To avoid the error, remove the --name option and its value from the configuration file.
For an example of a working configuration file, see Getting Started.
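As an illustration, a configuration whose args array pins a fixed container name (the name below is hypothetical) triggers this error as soon as a second client launches the server while the first container still holds that name:

```json
"args": [
    "run", "-i", "--rm",
    "--name", "qc-mcp-server",
    "-e", "QUANTCONNECT_USER_ID",
    "-e", "QUANTCONNECT_API_TOKEN",
    "-e", "AGENT_NAME",
    "--platform", "<your_platform>",
    "quantconnect/mcp-server"
]
```

Deleting the "--name", "qc-mcp-server" pair lets Docker assign each container a unique random name, so concurrent launches no longer collide.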
Service Outages
The MCP server relies on the QuantConnect API and the client application. To check the status of the QuantConnect API, see our Status page. To check the status of Copilot, see the Microsoft Status page. To check the status of the LLM, see its status page. For example, Claude users can see the Anthropic Status page.
Other Issues
For more information about troubleshooting the MCP server in Local Platform, see Troubleshoot and debug MCP servers in the VS Code documentation.