While all three models generally do a pretty good job at providing accurate information, it’s important to remember that how you phrase your prompts can really influence their responses. Even a slight change in wording can lead to unexpected, inaccurate, or even harmful outputs. This prompt injection threat, along with several other vulnerabilities of large language models, is nicely explained in this article by a seasoned application security leader.
The Good, the Bad, and the AI: A Deep Dive into Gemini, GPT-4, and GPT-4o Comparison