SplashLogicAI output is not perfect! You have the final say.

In an attempt to be a helpful self-serve GTM assistant, SplashLogicAI can occasionally produce responses or recommendations that are incorrect or misleading.

This is known as “hallucinating” information, and it’s a byproduct of current limitations in the frontier generative AI models that underlie parts of SplashLogicAI. For example, in some subject areas these models may not have been trained on the most up-to-date information and can get confused when prompted about current events. SplashLogicAI can also display quotes that look authoritative or sound convincing but are not grounded in fact. In other words, SplashLogicAI can present information that looks correct yet is simply wrong.

Users should not rely on SplashLogicAI as their single source of truth and should carefully scrutinize any content it produces, particularly any high-stakes advice.

When working with web search results, users should review SplashLogicAI’s cited sources. The original websites may contain important context or details not included in SplashLogicAI’s synthesis. Because the quality of SplashLogicAI’s responses depends on the underlying sources it references, checking the original content helps you spot information that may have been misinterpreted without the full context.

Just remember, SplashLogicAI is simply your assistant. You have the final say over its output.

You can use the thumbs-down button to let us know if a particular response was unhelpful, or submit a ticket with your thoughts or suggestions via the link at the top right of your screen.