Where to start with AI as a Diplomat? - September 2025 Edition

AI continues to grow at a dizzying pace. This beginner's guide helps diplomats get familiar with AI tools, regardless of their current proficiency.


9/5/25 Changes: Added Stage 4 and 5. Updated notes on Anthropic’s prompt-builder tool.

AI continues to grow at a dizzying pace. A friend asked me this week how to learn AI: not to become a master, but to get more comfortable. I offered (or insisted on) writing a brief sketch, so here it is.

I am specifically writing this for people who either can’t (or don’t want to) hook up all of their work systems to AI applications. For diplomats, there are often security limitations. Plenty of other people simply choose not to, for security, privacy, and myriad other reasons. Everything before Stage 4 requires no integrations at all, giving you plenty of material to work with.

How to think about AI use

Two simple principles stand out:

Only let AI do things you could do yourself with sufficient time. This has been one of the most useful tips I’ve ever gotten. If I fundamentally don’t understand what an AI is doing, I can’t audit it, check its work, or decide if it is on the right track. In my core domains, I can usually tell pretty quickly whether its output holds up.

You must decide what to do; an AI can only advise or execute. To borrow a line attributed to IBM: a computer can never be held accountable; therefore a computer must never make a management decision. I’m comfortable asking an AI for other perspectives, to research information, or even to execute simple tasks on my behalf. But I won’t let it play with bigger stakes.

Stage 0: Getting Started

Start here if you only have free AI tools today. 

Go out and get one paid subscription to a consumer-focused AI chat tool. You can pick ChatGPT, Claude, or Gemini. If you are on the fence or hesitant, pick Claude; its conversational ability feels the most natural to me. 

After you have done that, begin experimenting by asking some questions of your new AI chat tool. I’d recommend starting with topics that aren’t about work so you can get a feel for the range of its capabilities. Here are some of my favorites. Pay a little bit of attention to the structure: 

  • Help me choose somewhere to go for dinner tonight. I’m currently in XXX, and I want to eat out near XXX. I’d like something spicy and vegetarian, without a long wait to eat at 6pm. 
  • I want to go for a hike on Monday. I'm in XXX. I want to walk about 3 miles. Can you give me some options and tell me how to dress in a weather-appropriate way?
  • On Tuesday, I want to take a 10 mile bike ride that starts at Grove 10 in Rock Creek Park. Can you help me chart a route that ends at Grove 10 and gets me the right mileage?

You can adjust the details as you like, as well as add other details. 

As you make different requests, start each one in a new chat/conversation. Unless you have enabled a memory feature, AI platforms don’t carry context from one conversation to another, so you will have a fresh start each time.

The AI may ask you clarifying questions. It may also take a first stab and get it wrong. Clear, concise feedback is useful. Try giving responses and see if you get what you want. If you reach a dead end or the discussion drifts to an unhelpful place, that’s ok – try starting fresh in a new conversation and add more information to your initial request.

For our last prompt in this session, ask the AI the following in a new conversation: 

  • I'm a diplomat. What are three things I need to know about the news from this week?

It’ll likely take a few minutes to think before it gives you something. It will almost certainly be underdeveloped for what you need… which is a good segue into our next section. 

Stage 1: Learning Prompting 

Start here if you have a monthly subscription to a tool but nothing more. 

If you have completed the last section or played around with an AI, you’ll probably have noticed that some AI responses are considerably better than others. A big part of this is driven by the prompt you give the AI, which provides directions and limitations, as well as context for what you want it to do.

Unless you are using more advanced features, AIs only have access to the information you have shared in a conversation to understand your needs and motivations. That’s why you need to give them the right prompt.   

Structure. My prompts used to be just blobs of text. That format is harder for an LLM to comprehend, and it often leaves the desired outcome unclear. A simple structure works much better:

A good prompt delivers a few common elements: (1) context; (2) a task; (3) constraints; (4) a desired output and format. Additionally, for more complex prompts, you should tell the AI explicitly how to decompose the task and to engage in refinement or iteration.

As a result, a simple one-shot prompt might look like this. 

I am a senior XXX diplomat currently working on XXX. You are my staff assistant providing me a morning briefing on the key developments that affect our mission. Today, I need you to highlight the five global news developments that may impact our work in XXX, what happened recently, and what risk or opportunity it might create for us. Please ignore developments that would not come up in discussions with the public or with host government officials. For each issue, please summarize the issue in one paragraph, then provide me bulleted points that represent your assessment of the risks and opportunities it poses. Please consult both news sources and political analysis to shape your perspective. [Create this as an Artifact.] Once you have written your briefing, please review it again. Check your sources thoroughly, ensure that you have not missed key facts or perspectives from global commentators. Finally, review the text for brevity, delivering only the critical information. 

If you are using Gemini or ChatGPT, omit the bracketed language in the middle. 

Take a look at how this prompt lays out each of these elements:

  • Context: it explains who you are, what help you need, and why you need it. It would be even better if you narrowed it with a topical area of focus, like trade negotiations.
  • Task: It gives a very clear task: develop a morning news brief.
  • Constraints: This provides specific types of information I want–risks and opportunities, with no information that isn’t relevant in my country. 
  • Desired output: I described briefly what I want: a summary paragraph and bullets. 
  • Review: I asked it to take a series of steps and to check its own work before sending it to me. 

When you run this prompt, you’ll likely see that your vision of the output is slightly different than the AI product. That’s ok – this is a fairly loose prompt. If you want it to more closely align, be more prescriptive in what you lay out. A better prompt might look like this: 

CONTEXT

You are a staff assistant to a senior U.S. diplomat currently stationed in Brazil. Your role is to provide concise, actionable intelligence briefings that directly support diplomatic operations and decision-making in the Brazilian context. The diplomat needs to stay informed about global developments that could influence bilateral relations, regional dynamics, or broader U.S. interests in Latin America.

TASK

Prepare a morning briefing identifying the five most significant global news developments from the past 24-48 hours that have potential implications for U.S. diplomatic work in Brazil. For each development, provide both factual summary and strategic analysis of potential impacts on the mission.

CONSTRAINTS & DETAILS

  • Relevance filter: Include only developments likely to arise in discussions with Brazilian government officials, civil society leaders, business community, or informed public
  • Exclude: Purely domestic U.S. news, minor regional events outside Latin America, or developments with no clear Brazilian connection
  • Time frame: Focus on developments from the past 24-48 hours, with brief context for ongoing situations
  • Sources: Consult reputable international news outlets and political analysis from recognized experts/institutions
  • Perspective: Maintain diplomatic objectivity while clearly identifying U.S. interests and concerns

DESIRED OUTPUT

For each of the five developments:

  1. Issue Summary (1 paragraph): Concise overview of what happened, key actors involved, and immediate context
  2. Risk Assessment (bulleted points):
    • Potential negative implications for U.S.-Brazil relations
    • Threats to U.S. regional interests
    • Diplomatic challenges that may arise
  3. Opportunity Assessment (bulleted points):
    • Potential positive developments for bilateral cooperation
    • Strategic openings for U.S. engagement
    • Areas where U.S. leadership could be beneficial

REVIEW & REFINEMENT INSTRUCTIONS

Before finalizing your briefing:

  1. Source verification: Cross-reference key facts across multiple reliable sources; flag any information that appears in only one outlet
  2. Completeness check: Ensure you haven't overlooked major perspectives from Latin American analysts, Brazilian media, or other relevant regional voices
  3. Relevance audit: Re-examine each item to confirm it will genuinely matter to someone working in the U.S. Embassy in Brasília
  4. Brevity optimization: Eliminate redundant information, diplomatic jargon, and unnecessary background. Each summary should be digestible in under 2 minutes
  5. Action orientation: Ensure your risk/opportunity assessments provide clear strategic value rather than just listing abstract possibilities

This may seem excessive. But better instructions yield better results. 

If it seems daunting to write a full instruction file like this, that’s okay. There’s another way! 

Write a paragraph of directions, then ask your AI to build out a set of instructions that you intend to give another AI. Give it the section headers you want, along with the guidance you have in mind. Let it write an output, then either provide feedback so it can revise or make manual revisions yourself. Then paste the complete instructions into a new conversation and see what happens. 

My final pro-tip: if you are using Claude, ask it to generate documents as Artifacts. Artifacts are standalone documents that appear alongside your chat. This has a few advantages: it keeps your chat tidier; it’s easy to download or copy Artifacts; and it’s easy to ask the AI to revise an Artifact (and to browse past versions).

There are great resources if you want to dive deeper on this topic. Lenny’s Newsletter did a 90-minute interview on prompt engineering. Anthropic has a (slightly more technical) prompt engineering guide, as does OpenAI.

Claude now has an excellent Prompt Generator in the Anthropic Console, which can help you develop and refine a more sophisticated prompt. (Updated 9/5/25)

Stage 2: Layer on your personal context

Start here if you feel comfortable writing a multi-step prompt.

If you’ve noticed that a lot of your AI responses are fairly generic or that you are repeatedly entering the same sort of prompts, it’s time to write your own personal context file. I personally keep two of them: one is a set of details about me and my communications preferences, and a second is a style and research guide for writing. Here are some things you might want to write down:

Professional Background: Current role, sector, and key responsibilities. Areas of expertise and specializations. Professional goals or projects you're working on

Communication Preferences: Preferred level of detail (concise vs. comprehensive explanations). Whether you prefer step-by-step instructions or high-level guidance. Tone preferences (formal, casual, direct, etc.)

Output Preferences: Preferred formats (lists, prose, etc.). Whether you want actionable next steps included. How much background context you typically need. 

Context About Your Work/Projects: Current projects or initiatives you're focused on. Recurring tasks or challenges you face. Team structure or collaboration style. Sector-specific terminology or frameworks you use

Learning and Problem-Solving Style: How you prefer to receive new information. Whether you learn better through examples, analogies, or abstract concepts. Your experience level with different topics (beginner, intermediate, advanced). Areas where you want to be challenged vs. need more support

Practical Constraints Time limitations or deadlines you're working with. Technical limitations or requirements. Organizational policies or constraints that affect suggestions. 

Personal Interests and Values: Relevant hobbies or interests that might inform examples. Values that should guide recommendations (sustainability, efficiency, etc.). Topics you're curious about or want to explore.

Start by making a Google Doc or Word doc with a few of these. You don’t need all of them right now! 

The next time you write a prompt for the AI, attach the document and ask the AI to reference it when developing a response. That can be a simple reference or a clear, explicit sentence like these:

Please reference my background and communication preferences in my context file in formulating your response…

Or

Using the communication preferences and technical background from my context file, explain how the new U.S. tariffs work and their likely impacts.

In either case, the AI should start drawing on some of the details. 
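If you ever automate this, the same pattern is easy to script: read the context file once and prepend it, with an explicit instruction to reference it, to every request. A minimal sketch; the function name and wording are my own:

```python
from pathlib import Path

def with_context(context_path: str, request: str) -> str:
    """Prepend a personal context file to a request, with an explicit
    instruction to reference it (mirrors the example sentences above)."""
    context = Path(context_path).read_text(encoding="utf-8")
    return (
        "MY CONTEXT FILE\n" + context.strip() + "\n\n"
        "Please reference my background and communication preferences "
        "above when formulating your response.\n\n"
        "REQUEST\n" + request
    )
```

In a chat interface, attaching the document does the same job; the script just guarantees you never forget the attachment.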

While ChatGPT currently has a memory function designed to remember details like this about you and use them across conversations, there are two good arguments for managing this yourself. First, you can use this context file with any model or tool. All of the current tools support attaching a document like this, not just ChatGPT! Second, you control explicitly what goes in the document. You decide what you share with an AI and when.

Stage 3: Choose among models

Start here if you have a personal context document built. 

As you get more fluent and try more things, you’ll likely start to see tasks where your chosen AI platform doesn’t perform as well as you would like. Claude, for instance, is not very good at identifying things in photos. This is where the fun starts.

Each of the three major platforms has strengths and weaknesses. It’s worth signing up for a free account on the other two platforms. Free accounts typically have fewer features and a lower rate limit (i.e., how many messages you can send) but still give you enough access to experiment.

For this section, your main task is to find a prompt that has underperformed for you on your paid account and give it to the other two platforms. Take a minute to look at each of the responses. Does one stand out as better to you? Try a few more prompts across all three platforms to get a feel for how each one is different.

There is one more wrinkle. In addition to different platforms, each platform has multiple models designed to use more or less computing power to do different tasks. As of right now, you generally can choose between two models on each of Claude, Gemini, and ChatGPT. If you are on a free account, you may only have access to the lower power model.  

ChatGPT’s August update is unique in that it now toggles between different models – actually 3-4 different models under the hood – depending on the task you present it. At launch, it defaulted to low power models, resulting in poor performance on harder tasks. Since launch, OpenAI has adjusted it to draw on higher power models more often. Paid users can also manually select GPT 5 “thinking” to default to the highest power model. As a free user, you can’t choose, but you can encourage it to “think harder,” which often triggers it to do exactly that. (Updated 9/5/25)

Provider             Lower Power   Higher Power
ChatGPT (OpenAI)     GPT 5*        GPT 5 (thinking)
Claude (Anthropic)   Sonnet        Opus
Gemini (Google)      2.5 Flash     2.5 Pro

*actually several models under the hood

On low complexity tasks, you likely will not see a huge performance difference between a lower and higher power model. If you’re looking for a lunch spot, either one will do equally well. But with more complex, multi-step tasks, you will start to see significant quality differences. 

So why not always use the higher power model? Well – you’re going to get rate-limited more quickly. With Claude, Opus burns through your rate cap 5x faster than Sonnet does. It often takes me only 20 minutes to get rate-limited using Opus, which then prevents me from sending new messages for 4-5 hours.
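The decision rule I follow is simple enough to write down. This sketch uses the Claude model names from the table; the thresholds and function name are my own illustration, not an official recommendation:

```python
def choose_model(task_steps: int, needs_deep_reasoning: bool) -> str:
    """Rough heuristic: reserve the higher-power model (which burns
    through rate limits much faster) for complex, multi-step work."""
    if needs_deep_reasoning or task_steps > 3:
        return "Opus"    # higher power: multi-step analysis, drafting
    return "Sonnet"      # lower power: lookups, quick questions

choose_model(task_steps=1, needs_deep_reasoning=False)  # → "Sonnet"
```

Swap in GPT 5 / GPT 5 (thinking) or 2.5 Flash / 2.5 Pro for the other platforms; the trade-off is the same.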

Stage 4: Build Your Projects 

Start here if you are comfortable choosing among models and have a personal context file built.

If you have been experimenting with LLMs and a personal context file, you’ll notice something: you end up attaching the same documents to each new conversation, and you end up writing variations on the same prompt over and over again. Projects help you move faster by bundling a consistent set of contextual documents and system instructions so you can reuse them across conversations. Each new conversation inside a project has access to all documents attached to the project, and it will follow the guidance in both the project’s system instructions and your prompt. So what differentiates system instructions from a prompt?

System instructions take priority in guiding the model’s response, and they provide consistent guidance from conversation to conversation. For instance, you might use system instructions to define the AI’s role, describe what behavior or personality you want it to exercise, provide it with consistent rules and guidelines for responses, or describe a specific format that you want all responses to adhere to.

Why would you want this? 

Let’s say you are setting up a project to help you summarize documents. Over multiple interactions with LLMs, you have learned that you need to provide some standard language in prompts to get what you want: you might have a preferred way of receiving the information (e.g., a chapter summary of 3-5 bullets, each with 2-3 sentences), as well as a preferred tone (e.g., academic). You may also want it to provide citations. 

All of that guidance can go in project instructions, and you’ll never have to retype it in the prompt window again. What you ask it to summarize may change (the prompt), but the way you want it done doesn’t change (the system instructions). 

System Instructions:

  • Set the foundational behavior and personality of the AI
  • Establish persistent rules, guidelines, and constraints that apply throughout the conversation
  • Define the AI's "role" (e.g., helpful assistant, coding expert, creative writer)
  • Typically remain constant across multiple interactions
  • Have high priority in guiding the model's responses

Prompts:

  • Are the specific user inputs, questions, or requests
  • Vary from message to message based on what the user wants
  • Build on top of the system instructions
  • Can include task-specific context, examples, or formatting requests
  • Drive the particular response the user is seeking
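Chat APIs make this split concrete: the system instructions live in a dedicated field that stays constant, while each user prompt arrives as a message. A minimal sketch of that shape (the exact field names vary by provider; the summarizer wording is my own, echoing the example above):

```python
# Fixed per project: the summarizer guidance from the example above.
SYSTEM_INSTRUCTIONS = (
    "You are an academic summarizer. For each chapter, return 3-5 bullets "
    "of 2-3 sentences each, in a formal tone, with citations."
)

def make_request(user_prompt: str) -> dict:
    """Shape of a typical chat-API call: a persistent system field,
    plus a per-message user prompt that changes every time."""
    return {
        "system": SYSTEM_INSTRUCTIONS,  # constant across conversations
        "messages": [{"role": "user", "content": user_prompt}],
    }

req = make_request("Summarize the attached chapter on trade policy.")
```

A project is essentially this same separation, managed for you in the chat interface.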

You also probably have some standard context you want to provide all your projects – this is why I encourage you to make a personal context file that can be reused as either an attachment or copied into your projects! A common instruction in my projects is a variation on “reference the personal context file.” That encourages the LLM to read and consider the contents of your reference document early in every conversation. Note: an LLM will not follow instructions verbatim from a referenced document, so if you have specific things you need done a particular way, put them directly in the system instructions.

With all of this in hand, try making your first project. If you’re stumped, take a look at your previous conversations to find types of discussions you have had more than once. 

If you’re still stumped, here’s an alternative: have it build a weekly meal planner for you (and your family). Here are some things to consider:

  • What dietary preferences do you need to lay out up front? 
  • How many meals are you eating at home vs. boxing up for the office/school? 
  • What sort of cooking and ingredient limitations do you want to place on it? 

If you want to go deeper… 

  • Do you want it to ask you for your ingredients on-hand each week, prior to building a menu? 
  • Do you want to consider seasonality and availability of produce? 

There are many more places you could go to keep refining it.  

Stage 5: Use Research and Integrations

Start here if you have built at least one project to help you. The second half of this section covers system integrations, which may not be usable in your professional environment. 

Up to now, you are probably relying on either your own documents or the LLM’s training knowledge set for information. There are a huge number of options for extending what information your AI can access–starting with web search and research and extending to many other integrations as well. 

Let’s tackle web search and research first. ChatGPT, Claude, and Gemini can all search the web. For Claude, you need to enable it in the search and tools menu; it will then default to on in future conversations. ChatGPT and Gemini have it on by default. You can explicitly ask for a web search, but if you ask for specific details on something or for information on a time-sensitive topic, the AI is almost certainly going to search anyway. The default behavior for ChatGPT and Gemini is to provide links/citations at the end of each paragraph. Typically, you’ll need to prompt Claude if you want them.

Research (Claude) and Deep Research (ChatGPT and Gemini) are a bit like a super-charged version of web search. Instead of performing a single search and summarizing it, the LLM engages in multi-step research: conducting multiple searches, reviewing sources, examining their citations, cross-referencing information, comparing the veracity of information, and more. It then creates a synthesized report on the topic for you, typically with citations and all. Research is powerful, and it takes a while to run – somewhere between 10 and 30 minutes. You can’t interrupt the process or give it more information after it starts, so you need to be sure that you have scoped out your question well. There are also rate limits. On a paid plan, you can typically run 2-3 research requests every 5 hours on Claude. With ChatGPT, a paid plan is limited to about 10 per month.

Another way of adding information to an LLM is with integrations. Integrations allow an LLM to pull information from another tool you use – email, Google documents, Microsoft Teams, and more. ChatGPT calls these Connectors, and Gemini calls them Apps, but they all work the same way. Each platform has some default options for you to choose from. Try turning on an integration like Google Drive, then asking it to summarize past documents on a topic.

The AI can access any integration you allow it to–including multiple integrations in the same conversation. For instance, when I ask it to generate a meeting brief for me, it will search my personal CRM, my email, and my notes documents–compiling facts from all of them in its briefing. That sort of quick context-gathering is huge. 

Managing it effectively also requires some smart prompting. I, for instance, use system instructions or prompts to treat my personal CRM information as more authoritative than other sources and to generally treat more recent information as more accurate. 
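That “CRM beats email beats notes, and newer beats older” rule can even be written down as a sort order, which is roughly what the prompt asks the AI to do internally. A toy sketch; the source names, weights, and field names are all my own illustration:

```python
def rank_sources(items: list[dict]) -> list[dict]:
    """Order retrieved facts before drafting: more authoritative source
    first, then more recent date (ISO date strings sort correctly)."""
    authority = {"crm": 2, "email": 1, "notes": 0}  # illustrative weights
    return sorted(items,
                  key=lambda x: (authority[x["source"]], x["date"]),
                  reverse=True)

facts = [
    {"source": "notes", "date": "2025-09-01", "text": "Met at conference"},
    {"source": "crm",   "date": "2025-06-15", "text": "Prefers email contact"},
    {"source": "email", "date": "2025-08-30", "text": "Raised tariff concerns"},
]
ordered = rank_sources(facts)  # CRM entry first despite being oldest
```

In practice you express this in plain language in the system instructions; the AI does the weighing, not your code.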

AI and human in action

This is really where AI starts to take over significant manual, lower-value work for me. Now, I can prompt an AI to generate a briefing document (a BCL!) based on past documents, email correspondence, etc. It can generate a summary of past meetings; it can tell me what my counterparts might raise again.

But wait… there’s one thing that the AI can’t do for me–tell me what I should talk about. I still have to do some thinking about those three issues I might want to raise and why. AI may help me shape how I approach them, but it isn’t going to replace the thinking ahead of me. This is the potential balance that AI helps bring us as diplomats: focusing on what matters the most, while taking some of the grunt work out of it.