📢 Gate Square Exclusive: #PUBLIC Creative Contest# Is Now Live!
Join Gate Launchpool Round 297 — PublicAI (PUBLIC) and share your post on Gate Square for a chance to win from a 4,000 $PUBLIC prize pool
🎨 Event Period
Aug 18, 2025, 10:00 – Aug 22, 2025, 16:00 (UTC)
📌 How to Participate
Post original content on Gate Square related to PublicAI (PUBLIC) or the ongoing Launchpool event
Content must be at least 100 words (analysis, tutorials, creative graphics, reviews, etc.)
Add hashtag: #PUBLIC Creative Contest#
Include screenshots of your Launchpool participation (e.g., staking record, reward
How to train AI to work for you? The Secret of the Prompt Word (_)
Source: Matrix Partners
Some time ago, Zhang Ying shared some judgments on AI trends and 7 suggestions for AI entrepreneurship at the AI Conference of Chaos Academy. He mentioned in the first suggestion:
Today we will talk about "Engineering" a little bit. At the beginning of the year, various Midjourney dictionaries became popular on the Internet, such as:
glittering neon lights
high angle view high angle view
futuristic polished surfaces
Classical style, 18th-19th century Vintage
Ukiyo-e traditional Japanese ukiyoe
……
Until recently, the great gods have played tricks on the QR code again. After writing the prompt words of style and elements, they can produce some unique "AI art QR codes":
In today's article, we start with two cases, the first is an example of plain text, and the second is an example that requires code to implement, to introduce some important principles and techniques of Engineering:
Of course, there is no essential difference between the plain text of the prompt word and writing code. The purpose of using code is to save tokens and make the output more stable and accurate, because Chinese will still take up more tokens. When you need to call APIs on a large scale Sometimes, the cost will rise sharply. Plain text prompts can also be very complex. For example, I have seen more than 600 lines of text prompts, and they are composed of multiple groups of modules.
1A popular model of copywriting (plain text)
Let's briefly introduce what is "Engineering" (prompt engineering)? Usually, it means converting the questions you want to ask into input in a specific format, and using predefined templates, rules, and algorithms for processing, so that AI can better understand the task and give corresponding answers. Allow AI to accurately understand tasks to the greatest extent, reduce misunderstandings and errors caused by unclear language expressions, and enable it to perform specific tasks accurately and reliably.
Let's move on to this text example. In many scenarios that require copywriting, such as e-commerce pages, Xiaohongshu planting copywriting, forum posts, etc., if you directly let AI write, the effect may not be good,**but through these "five steps", you can make Improved output quality and more stable results. **
The first step, "feed" the copywriting you think is good to AI, and to make it clear for AI to learn this copywriting, we need to clearly say to AI: "Next, I will send you a copywriting study, the purpose is Create a copywriting model for popular models. After you have finished learning, you only need to reply: I have learned. The copywriting is as follows: "
**You can also continue to fine-tune this model, such as asking AI to write more creatively, or "feeding" AI with an initial copy that better suits your needs, and you can use this training idea and framework to train more suitable for you article model. **
Finally, the link of this training model is as follows, interested friends can try:
Based on this training idea, netizens have also developed many interesting scenarios, you can try:
2** An ordering robot model (implemented by code)**
How to use ChatGPT to build a meal ordering robot? We can do it through Engineering.
This example of a meal ordering robot comes from the DeepLearning.ai course. DeepLearning.ai founder Wu Enda teamed up with OpenAI developer Iza Fulford to launch an Engineering course for developers. Wu Enda is a star professor in the field of AI. He is a visiting professor in the Department of Computer Science and Electrical Engineering at Stanford University. He was once the director of the Stanford Artificial Intelligence Laboratory.
But since this is a commercial scenario, we need accurate and stable responses from ChatGPT. At this time, it is more appropriate to use computer language than plain text, so we need to deploy the OpenAI Python package first.
**The application scenario of this ordering robot is a pizzeria. The functions realized are: first greet customers, then collect orders, and ask whether they need to pick up or deliver. **In the case of delivery, the ordering bot can ask for the address. Finally, the ordering bot collects the payment.
In the actual conversation, the ordering robot will generate a response according to the user's input and the system's instructions:
User says: "Hi, I'd like to order a pizza"
The ordering bot will respond: "Great, what kind of pizza would you like to order? We have pepperoni, cheese and eggplant pizzas, what are their prices"
Throughout the conversation, the ordering robot will generate a response based on the user's input and the system's instructions, so as to make the conversation more natural and smooth, and at the same time avoid inserting obvious prompt information into the conversation.
First, we define a "helper function" that collects user messages to save us from typing them manually. This function will collect hints from the UI and append them to a list called context, which is then used to call the model every time, including system information and menus.
Since the prompt word already contains the price, it will be directly listed here. The user might reply: I like a medium eggplant pizza. So the user and the ordering robot can continue this conversation forever, including whether they want to deliver, whether they need additional ingredients, and double check whether they need other things (such as water? Or chips?)...
Finally, we ask the ordering bot to create a conversation-based summary that can be sent to the order system:
Since this article is not full of codes, I believe that you are not willing to see dense codes on your mobile phone, so we only put the main points here. If you want to learn more, you can watch this detailed instructional video:
3 Some key principles and techniques
Finally, let's summarize the two key principles and the current limitations of large language models. You need to know where the current lower limit of large language model capabilities is, which is more helpful for finding specific application scenarios.
**The two principles are: write clear and specific instructions, and give the model enough time to think. **
This principle emphasizes that when using language models such as ChatGPT, clear and specific instructions need to be given. Clarity does not mean shortness. Too short prompt words often make the model fall into guesswork. There are 4 specific strategies under this principle:
**1) Use delimiters to clearly delimit the different parts of the input. **
Delimiters can be backticks, quotation marks, etc. The core idea is to clearly identify different parts of the input, which helps the model understand and process the output. The delimiter is to let the model know clearly that this is an independent part, which can effectively avoid "hint injection". The so-called prompt injection refers to some conflicting instructions that may be generated by mistake when some users add new input, resulting in incorrect results.
In the prompt word, you can specify: Generate three fictional book titles, along with their authors and genres, provided in the following format: Book ID, Title, Author, and Genre.
If the task has assumptions and these conditions are not necessarily met, the model can be told to check these assumptions first, indicate if they are not met, and stop the task from directly feeding back to avoid unexpected wrong results.
Like in the following example: we're going to copy a paragraph that describes how to make tea, and then copy the prompt, which says if the text contains a series of directions, rewrite those directions in the following format, followed by the step-by-step instructions. If the text does not contain a series of directions, simply write "No steps provided".
This strategy is simple but important, that is, we can include a correct example in the prompt word. For example, we ask the model to answer in a consistent tone, the input task is "answer questions in a consistent style", and then provide an example of a conversation between a child and a grandfather, the child said: "Teach me what patience is", the grandfather Answer by analogy.
Now we ask the model to answer with a consistent tone, when the next question is: "teach me what is resilience". Since the model already has this few examples, it will answer the next task with a similar tone, it will answer: "Resilience is like a tree that can be bent by the wind but never breaks".
**Principle 2: Give the model enough time to think. **
If the model makes a reasoning error because it is eager to draw a wrong conclusion, it should try to reconstruct the prompt words. The core idea is to require the model to perform a series of related reasoning before providing the final answer. There are 2 strategies under this principle:
Explicitly stating the steps required to complete a task can help the model better understand the task and produce more accurate output.
2) Instruct the model (before rushing to conclusions) to formulate its own solution:
Explicitly instructing the model to reason about the solution on its own before reaching a conclusion can help the model to complete the task more accurately.
**Additional Discussion: How to view the limitations of the model? **
At present, the biggest problem with the commercialization of large language models is "illusion". Because during its training process, the large model is exposed to a large amount of knowledge, but it does not perfectly remember the information it has seen, and it is not clear where the knowledge boundary is. This means that the big model may try to answer all the questions, sometimes making up things that sound plausible but are not.
A strategy to reduce hallucinations is to first ask the large language model to find all relevant parts from the text, then ask it to use those quotations to answer the question, and trace the answer back to the source document. This strategy can reduce the occurrence of hallucinations.
Big language models like GPT-3.5 and GPT-4 understand everything, but it is precisely because it is too broad that if you don't give it a hint, the answer you get is often a car wheel.
The importance of (prompt) at this time is self-evident, and it is not just a word, or a simple sentence, if you want to achieve more complex functions, you also need more complex prompt words.
It also requires everyone to brainstorm and come up with more novel or more suitable gameplays. Its "exclusiveness" is also very strong. For example, the previous award-winning "Space Opera House", the author claimed that he spent more than 80 hours and more than 900 iterations to come up with this work, and has refused to share what prompts Midjourney used so far.
We don’t need to start from scratch now, there are many good communities at home and abroad, everyone is exchanging experience on the use of prompt words, and even lists the current popular prompt words, which we will list in the appendix at the end of the article.
** From the perspective of entrepreneurship/investment, everyone is now discussing where the opportunities in the application layer are. I often go to these popular prompt word websites to see, and maybe I can find some innovations in application scenarios from those newly released popular prompt words inspiration. No matter how much you have seen, it is better to give it a try. **
Addendum: Some Prompt Word Discussion Sites
(A popular prompt word website overseas, which can be sorted by popularity and covers a complete range of scenarios.)
(The Writings subreddit on Reddit is a very active community where users post and respond to various writing prompts.)
(The 100 Best ChatGPT Prompt Words to Power Your Workflow.)
(It can be sorted by popularity, and the scenarios covered are very complete, from writing, programming to finance, medical care, etc.)
(Xiaohongshu copywriting ranks the highest, and it has a more intuitive display of prompt words.)