Pravda: How to communicate effectively with artificial intelligence

Tomáš Nágel, Data scientist at Aliter Technologies.

Introduction to prompt engineering

More and more people are using artificial intelligence through AI tools like ChatGPT, Claude, Gemini, or Copilot directly in the applications they use daily, whether for writing texts and emails, organizing tasks, or searching for information. They may have noticed, however, that these systems' answers do not always match their expectations. Sometimes the answer is too brief; other times the large language model (LLM) misses the topic or answers inaccurately. This is where a new skill comes in: so-called prompt engineering, the art of asking the right question and guiding the model to meet our requirements as well as possible. Just as in a real conversation much depends on how we ask, so when communicating with a model the quality of the result is often a matter of the right input.

What is a prompt and why does it matter?

Simply put, a prompt is an input message or task with which we explain to the large language model (LLM) what we expect from it. It is not just a simple question: a prompt can be a description of a situation, a task assignment, an instruction to take on a certain "role", or a call for creative work. The clearer, more specific, and more understandable the prompt is for the model, the more useful the answer will be. Unlike classic chatbots, whose command options are limited, a large language model has essentially unlimited possibilities, so it is important to tell it clearly what we actually need.

Example:

If you enter only "write me an email about a delayed project," you will receive a universal and very general answer. However, if you expand the prompt to "You are an assistant manager in an IT company. I need you to write a formal email to a colleague about a delayed project: express understanding, request a new deadline, and attach a meeting proposal," the answer will be much more precise and far more usable in practice.

Basic prompting techniques: Zero-shot, one-shot, few-shot

The first step is to choose the appropriate technique depending on how much you need to “guide” the model to the correct answer:

  • Zero-shot prompting means that you give the model only a task or question without further examples or explanations. For example: “Create an introduction to an article about a healthy lifestyle.”

  • One-shot prompting provides one specific example according to which the model is to understand the task. “This is what the previous introduction looks like: (example). Create a similar one, but on the topic of productivity at work.”

  • Few-shot prompting expands the task with multiple examples, increasing the likelihood that the model will match your style or requirement.

Example:

If you need to generate customer support responses in a specific style, add 2–3 sample responses (with the exact wording or style you want) to the prompt for the model to follow.
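The few-shot idea above can be sketched in Python. This is a minimal illustration, not tied to any particular provider: it assumes a chat-style API that accepts a list of role/content messages (as most current LLM APIs do), and the sample support replies are invented for the example. The actual call to a model is omitted.

```python
# Minimal sketch: assembling a few-shot prompt as chat-style messages.
# The sample replies below are hypothetical; real chat APIs accept a
# similar list of {"role": ..., "content": ...} messages.

EXAMPLES = [
    ("My order arrived damaged.",
     "We are sorry to hear that! Please send us a photo of the damage and "
     "your order number, and we will arrange a replacement right away."),
    ("Can I change my delivery address?",
     "Of course! As long as the order has not shipped, reply with the new "
     "address and we will update it for you."),
]

def build_few_shot_prompt(customer_message: str) -> list[dict]:
    """Build a message list: system role, a few worked examples, then the task."""
    messages = [{"role": "system",
                 "content": "You are a friendly customer-support agent. "
                            "Answer in the same tone as the examples."}]
    for question, answer in EXAMPLES:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": customer_message})
    return messages

prompt = build_few_shot_prompt("Where is my refund?")
print(len(prompt))  # 1 system message + 2 worked examples (2 turns each) + 1 task
```

The examples are sent as past user/assistant turns, so the model treats them as conversation history whose style it should continue.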

Context, role and format: what makes an effective prompt

Modern language models can accept a prompt that contains:

  • Role: Who is supposed to “answer”? For example, an expert, a teacher, or a marketer.

  • Task: What exactly is the model supposed to do? For example, summarize, explain, create or adjust the tone.

  • Context: What are the important connections? Where can the model draw data (e.g. a specific document, file, email)?

  • Format: What form should the output take? (e.g. a table, bullet points, an email, a blog post, a response in JSON format)

Example:

“You are an HR manager in a medium-sized company. Based on the attached document @Pravidlá_dovolenka_v2025, prepare a short internal announcement for employees, where you will explain the most important changes in the rules for taking vacation in 2025. Use a positive and motivating tone and divide the text into bullet points.”
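The four building blocks (role, task, context, format) can be composed mechanically. The sketch below is only an illustration of that structure; the labels and wording of each block are an assumption, not a fixed standard.

```python
# Minimal sketch: composing a prompt from the four building blocks
# (role, task, context, format). The block labels are illustrative.

def build_prompt(role: str, task: str, context: str, output_format: str) -> str:
    """Join the role, task, context and format blocks into one prompt string."""
    return "\n".join([
        f"Role: {role}",
        f"Task: {task}",
        f"Context: {context}",
        f"Output format: {output_format}",
    ])

prompt = build_prompt(
    role="HR manager in a medium-sized company",
    task="Prepare a short internal announcement about the 2025 vacation rules",
    context="Use the attached rules document as the only source of facts",
    output_format="Positive, motivating tone; bullet points",
)
print(prompt)
```

Keeping the blocks separate makes it easy to swap one out (e.g. change only the format) while iterating.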

Examples of real prompts and their modification (iteration)

One of the great advantages of language models is the ability to conduct a conversation with them, where you gradually fine-tune the result. If you are dissatisfied with the answer, you can simply modify the prompt and try again. Often, just a small change or addition to the example leads to a better result.

Example of a prompt iteration:

First prompt:

  1. “Propose a teambuilding plan for 2 days.”

  2. Is the answer too brief? You add: “Add specific activities to develop team cooperation, take into account the average age of the team (35 years) and the fact that everyone likes to play sports.”

  3. Is the format missing? “Divide the program into a table by days and times.”

This creates an iteration in which the result gradually approaches your ideas.
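The iteration above can be pictured as a growing conversation history. This toy sketch only shows the bookkeeping: each refinement is appended as a new user turn so the model sees everything said so far; the actual send/receive step with a model is omitted.

```python
# Toy sketch of prompt iteration: rather than rewriting the prompt from
# scratch, each refinement is appended as a new user turn. The model
# call itself is left out of this illustration.

def refine(history: list[dict], follow_up: str) -> list[dict]:
    """Append one more user instruction to the running conversation."""
    history.append({"role": "user", "content": follow_up})
    return history

conversation: list[dict] = []
refine(conversation, "Propose a teambuilding plan for 2 days.")
refine(conversation, "Add specific activities to develop team cooperation; "
                     "the average age is 35 and everyone likes sports.")
refine(conversation, "Divide the program into a table by days and times.")
print(len(conversation))  # three increasingly specific instructions
```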

Advanced techniques: Chain of Thought, ReAct and the structure of answers

For more challenging tasks, you can use the so-called “Chain of Thought” technique: you ask the model to first describe, step by step, how it would proceed or what questions it needs to answer before it starts on the solution itself.

Example:

“First, describe the procedure for how you would handle a customer complaint about a product that was damaged upon delivery. Then, come up with a template for responding to the customer and suggest possible ways to compensate.”

The ReAct (Reason & Act) method combines reasoning with concrete action: in addition to analyzing the problem, the model also suggests concrete steps the user can take, or directly processes data from an external file or email, or creates a table.
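The ReAct loop can be illustrated with a toy example. Everything here is a stand-in: the order-status "tool" is fake data, and the Thought/Action lines are scripted rather than generated; in a real ReAct setup the LLM itself produces the reasoning and chooses which tool to call.

```python
# Toy sketch of one ReAct (Reason -> Act -> Observe) cycle for the
# complaint scenario above. The tool and the "thoughts" are hypothetical
# stand-ins; a real setup has the LLM generate these lines itself.

ORDERS = {"A-17": "damaged on delivery"}  # fake external data source

def tool_lookup_order(order_id: str) -> str:
    """Pretend external tool: look up an order's status."""
    return ORDERS.get(order_id, "unknown order")

def react_step(order_id: str) -> list[str]:
    """One reasoning-and-acting cycle, returned as a readable trace."""
    trace = []
    trace.append(f"Thought: I should check the status of order {order_id}.")
    trace.append(f"Action: lookup_order({order_id!r})")
    observation = tool_lookup_order(order_id)
    trace.append(f"Observation: {observation}")
    trace.append("Thought: The product was damaged, so I will draft an "
                 "apology and offer a replacement or a refund.")
    return trace

for line in react_step("A-17"):
    print(line)
```

The key point is the alternation: a reasoning step decides what to do, an action fetches real information, and the observation feeds the next reasoning step.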

Tips for improving prompting in practice

  1. Write naturally and completely. Communicate with the model as with a colleague. Short and vague tasks like “Write a blog on the topic of effective time management.” tend to be ineffective. Try, for example: “Write a blog on the topic of effective time management for starting entrepreneurs, divide it into an introduction, main points and a conclusion, be specific and give examples.”

  2. Don’t be afraid of details and examples. If you care about style or structure, just describe it or give the model a sample paragraph that would suit you.

  3. Don’t be afraid of multiple attempts. You can adjust any prompt and try again; the model is well suited to iteration.

  4. Add context. If you work with company documents, enter them directly into the prompt (for example, in Google Workspace you can “tag” a specific document to the model using @filename).

  5. Clearly specify the output format. If you need a bulleted answer, a table, a formal email, or a short summary, always say so in the prompt up front.

  6. Review the result. Always take the model’s answer as a suggestion, and if it’s an important email or public output, check for accuracy, privacy, and relevance.

What if a large language model (LLM) answers inaccurately or incorrectly?

Even the best models aren’t infallible. Sometimes they can make up information, misunderstand the intent of a question, or misinterpret the context. In those cases, it’s a good idea to edit, expand, or rephrase the prompt. If you ask, “Summarize this document for me,” and the model responds inaccurately, try, “Summarize the main points of document X, give at least 3 specific examples, and highlight recommendations at the end.”

Prompting is a skill everyone can master

Prompt engineering may seem like a technical skill at first glance, but it’s really about clear, concrete, and human communication. Anyone using AI tools, whether at work or at home, can, with a little practice and a few simple rules, significantly improve the quality and value of the answers they receive. Just experiment and, most importantly, assign tasks exactly according to your needs, just as you would assign a task to a colleague.

SOURCE: Pravda
