Introduction
In the rapidly evolving world of AI, the Model Context Protocol (MCP) has emerged as a game changer for creating and integrating AI tools. Whether you want to connect your AI assistant to a database, manage repositories on GitHub, or even troubleshoot browser errors in real-time, MCP provides a standardized way to give your Large Language Models (LLMs) superpowers. This tutorial walks you through everything you need to know—from how MCP differs from a traditional API, to step-by-step instructions for setting up MCP servers and integrating them into your AI workflow.
What Is MCP?
MCP, or Model Context Protocol, is a standardized method that allows AI agents (LLMs) to connect with external tools and services. Think of it like a universal “USB port” for AI: once your AI agent (for example, Cursor, Claude Desktop, n8n, or other frameworks) supports MCP, you can easily “plug in” different servers and services such as GitHub, databases, or even Zapier workflows.
MCP vs. Traditional APIs: The Nail and Screw Analogy
Imagine you’re trying to hang a picture on a wall. With an API-based approach, your AI assistant knows exactly what it needs: a nail, for instance. If the nail is missing, the AI doesn’t know what to do, and everything stops.
MCP, on the other hand, is more flexible. It checks for nails first, but if none are available, it will look for other options—like screws—and adapt accordingly. This adaptability means fewer errors, simpler workflows, and a more robust AI integration experience.
Why MCP Is the Future of AI Integration
Before MCP, developers had to hard-code specific tools into each individual AI agent. Reusing these tools with another framework meant rewriting large chunks of code. With MCP, you can package these tools as an “MCP server” and easily reuse or share them across different AI platforms.
This standardization approach saves massive development time, promotes collaboration, and simplifies how you integrate AI into apps. As more vendors adopt MCP, expect to see a rapidly expanding ecosystem of plug-and-play AI tools.
Step 1: Installing MCP Servers in Your AI Environment
Different MCP servers let you do different tasks. Below are some popular servers and how to add them to your favorite AI coding assistant—for example, Cursor. The instructions are very similar if you use Claude Desktop, Windsurf, or other AI IDE clients.
1. PostgreSQL (Database Integration)
PostgreSQL is a popular database solution. With MCP, your AI can query the database in real-time. Here’s how to set it up:
1.1. Prerequisites
- Node.js Installed (version 18 or newer recommended). Check via:
node --version
- PostgreSQL Database: Make sure you have a running instance (local or remote).
- AI Client with MCP Support: Examples include Cursor, Claude Desktop, or WindSurf.
1.2. Configure Your AI Client
Below are examples for two popular MCP-compatible tools. Adapt these steps as needed for your platform.
1.2.1 Cursor
- Open Cursor and navigate to Settings > Cursor Settings > MCP.
- Click Add new MCP server and name it (e.g., postgres-mcp).
- In the mcp.json file, add:
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost/mydb"
      ]
    }
  }
}
1.2.2 Claude Desktop
- Open Claude Desktop and go to File > Settings.
- Under Developer, open your config JSON file (claude_desktop_config.json).
- Add a configuration block under mcpServers. For example:
{
  "mcpServers": {
    "postgres-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://username:password@localhost:5432/mydatabase"
      ]
    }
  }
}
- Save and restart (or reload) Claude Desktop. Check that postgres-mcp appears in the tool list.
1.3. Testing Your PostgreSQL Integration
Open your AI chat window and ask a question that requires data. For example:
"Show me the 5 most recent entries in my 'customer_orders' table."
Your AI should call the MCP server, run the query, and return any matching rows. You can refine queries or specify columns using natural language commands.
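Under the hood, the MCP server turns that natural-language request into plain SQL. The snippet below is a rough sketch of an equivalent query using the node-postgres (pg) client; the connection string and the created_at column are assumptions based on the examples above, not part of the server itself.
// query-sketch.js: roughly what the MCP server runs for the request above
// Assumes: npm install pg, the postgresql://localhost/mydb connection string,
// and that customer_orders has a created_at timestamp column
const { Client } = require("pg");

async function main() {
  const client = new Client({ connectionString: "postgresql://localhost/mydb" });
  await client.connect();
  const { rows } = await client.query(
    "SELECT * FROM customer_orders ORDER BY created_at DESC LIMIT 5"
  );
  console.table(rows);
  await client.end();
}

main().catch(console.error);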
1.4. Troubleshooting & Best Practices
- Connection Errors: Double-check your PostgreSQL URI. Make sure your database user has the correct privileges.
- Firewall Settings: If PostgreSQL is remote, ensure the port (default 5432) is open and accessible.
- Read-Only vs. Write: Some official PostgreSQL MCP servers are read-only by default. If you need write access, look for a variant that supports insert/update, or customize your own.
- Port Conflicts: If the MCP server port is taken, specify a new port by setting an environment variable (e.g., PORT=3001) before the npx command (see the configuration sketch after this list).
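For example, Cursor's mcp.json lets you pass per-server environment variables. The block below is a sketch of forwarding PORT to the Postgres server, following the suggestion above; whether it has any effect depends on the specific server actually reading that variable.
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost/mydb"
      ],
      "env": { "PORT": "3001" }
    }
  }
}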
Why Use PostgreSQL MCP?
- Real-Time Data Access: Instantly query database records without leaving your coding environment.
- Streamlined Workflow: Eliminate manual SQL coding by delegating queries to an AI-powered natural language interface.
- Powerful Integrations: Combine PostgreSQL MCP with other servers (e.g., GitHub MCP, Browser-Tools MCP) to create full-stack AI workflows.
2. GitHub (Version Control Integration)
MCP also supports GitHub integration. This means your AI can push, pull, and manage repositories programmatically.
2.1. Prerequisites
- Node.js Installed: Preferably version 18 or newer. Verify by running node --version.
- GitHub Personal Access Token (PAT): You’ll need a fine-grained token for API access.
- Compatible AI Client: Such as Cursor, Claude Desktop, WindSurf, etc., with MCP support.
Generating a Personal Access Token (PAT)
- Log in to your GitHub account.
- Go to Settings > Developer Settings > Personal access tokens.
- Click Fine-grained tokens and then Generate new token.
- Select Repository permissions as Read and Write, or other scopes as needed.
- Copy the generated token somewhere secure (you will need it in the next steps).
2.2. Connect GitHub MCP to Your AI Client
Below are two popular setups:
2.2.1 Cursor
- Open Cursor and go to Settings > Cursor Settings > MCP (or “Tools”).
- Select Add new MCP server.
- In the “mcp.json” file, add:
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-github"
      ],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR_TOKEN>"
      }
    }
  }
}
2.2.2 Claude Desktop
- Open Claude Desktop and click File > Settings.
- Go to the Developer tab, then open your config JSON file (claude_desktop_config.json).
- Add an entry under mcpServers specifying the command and environment variable, for example:
{
  "mcpServers": {
    "github-mcp": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR_TOKEN>"
      }
    }
  }
}
- Save and restart Claude Desktop (or reload your config). The “github-mcp” tools should now be listed.
2.3. Testing Your GitHub MCP Integration
Open your AI client’s chat, and type something like:
"Create a new repository named 'test-ai-repo' and initialize it with a README."
Your AI should call the GitHub MCP tool to create the repo. Look for a success message. Verify on GitHub to see if it’s been created as requested.
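Behind the scenes, the GitHub MCP server makes standard REST API calls with your token. The sketch below shows roughly the equivalent request for the prompt above, using Node's built-in fetch; it assumes GITHUB_PERSONAL_ACCESS_TOKEN is set in your environment and is not the server's actual implementation.
// create-repo-sketch.js: roughly the API call the GitHub MCP tool makes for the prompt above
// Assumes Node 18+ (global fetch) and GITHUB_PERSONAL_ACCESS_TOKEN in your environment
const token = process.env.GITHUB_PERSONAL_ACCESS_TOKEN;

async function createRepo() {
  const response = await fetch("https://api.github.com/user/repos", {
    method: "POST",
    headers: {
      Accept: "application/vnd.github+json",
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ name: "test-ai-repo", auto_init: true }),
  });
  if (!response.ok) throw new Error(`GitHub API error: ${response.status}`);
  const repo = await response.json();
  console.log(`Created ${repo.full_name}`);
}

createRepo().catch(console.error);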
2.4. Troubleshooting & Best Practices
- Token Permissions: Ensure your PAT has the required scopes (e.g., repo: read/write) or you may see authorization errors.
- Port Conflicts: Some setups may default to a specific port. If it’s busy, specify a different port via environment variables or CLI flags.
- Keep Tokens Secure: Never commit your PAT directly to public repos. Use environment variables or secret management tools.
- Repo Existence Checks: If you get an error about an existing repo, confirm the repo name doesn’t exist or remove it before recreating.
Why Use GitHub MCP?
- Code Automation: Simplify tasks like creating branches, pushing commits, and merging pull requests.
- Continuous Deployment: Combine GitHub MCP with other MCP servers (like Zapier or Docker servers) for end-to-end DevOps automation.
- Team Collaboration: Let your AI propose or implement changes. Perfect for rapid prototyping and safe code reviews.
3. Browser Debugging (Real-Time Console Access)
Ever built a web app and struggled with JavaScript errors? With an MCP server for browser debugging, your AI can automatically:
- Open your website in a controlled browser environment.
- Check the console and network for errors.
- Suggest fixes and even apply them in your project.
One example server is browser-tools-mcp, which uses a Chrome extension to let your AI see console logs and fix issues on the fly.
3.1. Prerequisites
- Node.js: Recommended version 18 or later. Verify via:
node --version
- Chromium-Based Browser: Such as Google Chrome or Microsoft Edge.
- AI Client Supporting MCP: Examples include Cursor, Claude Desktop, and WindSurf.
3.2. Clone the Repository
Begin by cloning the browser-tools-mcp repository from GitHub:
git clone https://github.com/context-labs/browser-tools-mcp.git
This creates a folder named browser-tools-mcp in your chosen directory.
3.3. Load the Chrome Extension
- Open Google Chrome and navigate to chrome://extensions.
- Toggle on Developer Mode (usually in the top-right corner).
- Click Load unpacked and select the chromeextension folder found inside the cloned browser-tools-mcp directory.
The extension should now appear in your Chrome extensions list. Ensure it is enabled.
3.4. Install Dependencies
Switch to the repository folder and run npm install:
cd browser-tools-mcp
npm install
This command downloads and installs all required Node.js packages.
3.5. Launch the MCP Server
From the same browser-tools-mcp folder, start the server:
npm run serve
If everything is set up correctly, you will see a message indicating the server is running (by default on port 3000).
3.6. Configure Your AI Client
The final step is telling your AI tool or IDE how to connect to browser-tools-mcp. Below are example setups for two popular environments:
3.6.1 Cursor
- Open Cursor and go to Settings > Cursor Settings > MCP.
- Click Add new MCP server.
- Name it (e.g., browser-tools) and in the “mcp.json” file, add:
{
  "mcpServers": {
    "browser-tools": {
      "command": "npx",
      "args": ["-y", "@agentdeskai/browser-tools-mcp@1.2.0"],
      "enabled": true
    }
  }
}
3.6.2 Claude Desktop
- Open Claude Desktop and select File > Settings.
- Under Developer, open your config JSON file.
- Add a configuration block referencing npm run serve and the path to browser-tools-mcp.
- Restart Claude Desktop or reload the configuration.
Once linked, you’ll see browser-tools in the list of available tools, allowing your AI to call functions like getConsoleLogs or takeScreenshot.
3.7. Quick Usage Test
Open your AI client, start a chat, and enter something like:
"Please open http://localhost:3000 and take a screenshot of the console."
Watch as your AI retrieves console logs, screenshots, or other browser details—right from your editor.
3.8. Troubleshooting & Best Practices
- Port Conflicts: If 3000 is already in use, run PORT=3001 npm run serve and update your AI config accordingly.
- Extension Permissions: Ensure the extension is active and allowed to run in the background.
- Firewall Settings: Local traffic must be permitted on the chosen port for browser-tools-mcp to function.
Why Use browser-tools-mcp?
By integrating browser-tools-mcp into your AI workflow, you can:
- Combine Tools: Chain browser tools with other MCP servers (e.g., database queries, GitHub commits) for powerful, end-to-end workflows.
- Automate Debugging: Capture console logs, errors, and screenshots with a single request.
- Save Time: No need to manually open DevTools—your AI handles everything.
Step 2: Building Your Own MCP Server
If you can’t find a server for the tool you want, you can create your own. For instance, you might want a server that triggers Zapier workflows via webhooks, letting you connect to thousands of apps with minimal effort.
Example: Creating a Zapier Webhook MCP Server
Below is a simplified example that demonstrates building a custom server to trigger a Zapier webhook:
const express = require("express");
const app = express();

// Parse JSON request bodies so callers can pass data along to the webhook
app.use(express.json());

// For demonstration, we're using an environment variable for the Zapier webhook URL
const ZAPIER_WEBHOOK_URL = process.env.ZAPIER_WEBHOOK_URL;

// Expose a single "triggerWebhook" tool to the AI
app.post("/triggerWebhook", (req, res) => {
  // Basic example that forwards a POST request to Zapier, passing along any data from req.body
  // Node 18+ provides a global fetch, so no extra HTTP library is needed
  fetch(ZAPIER_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: "Triggered via MCP", ...(req.body || {}) }),
  })
    .then(() => res.json({ success: true }))
    .catch(() => res.status(500).json({ success: false }));
});

// Start the MCP server
app.listen(3000, () => {
  console.log("Zapier Webhook MCP Server running on port 3000");
});
In your AI IDE (Cursor, Claude Desktop, etc.), add this server by running it with your webhook URL set. For example, if you saved the code above as zapier-server.js:
ZAPIER_WEBHOOK_URL=https://hooks.zapier.com/... node zapier-server.js
Now, when you say, “Trigger my Zapier webhook,” the AI can call /triggerWebhook under the hood.
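If you would rather have the tool speak the MCP protocol natively (so clients can discover it over stdio without an HTTP shim), the official TypeScript SDK makes that straightforward. The following is a minimal sketch, assuming an ESM project with @modelcontextprotocol/sdk and zod installed; the server and tool names are placeholders, not anything the article prescribes.
// zapier-mcp.mjs: a minimal native MCP server for the same webhook (sketch)
// Assumes Node 18+, "npm install @modelcontextprotocol/sdk zod", and ZAPIER_WEBHOOK_URL set
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const ZAPIER_WEBHOOK_URL = process.env.ZAPIER_WEBHOOK_URL;

const server = new McpServer({ name: "zapier-webhook", version: "1.0.0" });

// Register a single tool that MCP clients can discover and call
server.tool(
  "trigger_webhook",
  { message: z.string().describe("Message to forward to the Zapier webhook") },
  async ({ message }) => {
    const response = await fetch(ZAPIER_WEBHOOK_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ message }),
    });
    return {
      content: [{ type: "text", text: `Zapier responded with status ${response.status}` }],
    };
  }
);

// Clients such as Cursor or Claude Desktop launch this script and talk to it over stdio
await server.connect(new StdioServerTransport());
You would then register it in mcp.json with "command": "node" and the script path, the same way the ready-made servers above are registered.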
Step 3: Putting It All Together
With multiple MCP servers configured (database, GitHub, browser, Zapier, etc.), your AI becomes a powerful orchestrator. For example:
- Database + GitHub: Fetch data from PostgreSQL and push a new commit with the updated records.
- Browser + Debugging + Zapier: Test your web app, fix console errors, and trigger a Zap to send you an alert if something fails.
- Any Third-Party Integration: If a service has an API, you can wrap it in a custom MCP server, making it instantly reusable across multiple AI agents or IDEs.
Check out the list of other MCP servers you can integrate. If you would like to combine MCPs for PostgreSQL, GitHub, and browser-tools, your mcp.json file in Cursor settings should look like this:
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost/mydb"
      ]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR_TOKEN>"
      }
    },
    "browser-tools": {
      "command": "npx",
      "args": ["-y", "@agentdeskai/browser-tools-mcp@1.2.0"],
      "enabled": true
    }
  }
}
Claude MCP and the Future
Anthropic’s Claude has been a major driving force behind MCP’s adoption. Many see it as the future standard for giving AI “tooling” without tying you to a single platform’s ecosystem. As more companies (and open-source developers) adopt and extend MCP, you can expect:
- More Official Integrations: Expect everything from advanced browser automation to machine learning ops tools.
- Resource Sharing: Standardized access to databases, files, and code bases in real time.
- Continuous Evolution: New features like advanced multi-step agent workflows and on-the-fly resource provisioning.
Conclusion
MCP is revolutionizing how AI agents connect to the world around them. Instead of building ad-hoc integrations for each framework, developers can create and share standardized servers that work anywhere. We’ve walked through how to set up popular servers, debug websites, push code to GitHub, and even create your own Zapier webhook MCP server. The possibilities are endless—and this protocol is still in its early days, so now is the perfect time to start exploring and harnessing its potential.
If you enjoyed this guide, stay tuned for more AI and MCP-related content. Don’t forget to bookmark the Coding Money website and subscribe to the newsletter.