
FAQs


Common questions you may have been afraid to ask:

At our company, we provide a powerful generative AI model called Kaila, which can be used to generate text, speech, or other types of data. We are committed to providing you with the best possible tools to achieve your goals, and Kaila is an excellent resource for this purpose.

However, it is important to note that Kaila is an advanced AI system that can be trained on a wide variety of data, but the results are based on the data used to train it. Kaila is a tool, but the user is ultimately responsible for the outcomes generated by it, whether intended or unintended.

We do not guarantee the accuracy, completeness, suitability, or availability of the information generated by Kaila, and we do not assume any liability for any errors or omissions in the information generated by Kaila.

We encourage our users to use Kaila responsibly and ethically, following best practices and being aware of the model's limitations and biases. It is the user's responsibility to check and verify Kaila's results, especially when they are used for decision-making or other important purposes.

It is important to note that the data and outcomes generated by Kaila are only intended for informational and research purposes and are not intended for use in any application or service that would create any legal or financial obligations.

In summary, while Kaila is a powerful tool that can generate a wide range of data, it is not intended to replace human judgement and our company will not be held responsible for the outcomes generated by it. We encourage our clients to use Kaila responsibly and check the results generated by it.

Text generative AI is a kind of computer program that can write sentences and paragraphs that sound like they were written by a human. It works by looking at lots of examples of text written by people, and then figuring out the patterns and rules of how the words and sentences are put together.

For example, imagine that you are making a recipe book, and you have lots of recipes written by different people. You can use text generative AI to automatically generate new recipes that are similar to the ones you already have, but written in a different way.

As another example, if you have lots of children’s stories and you want a new one, text generative AI can write it for you by applying the patterns it learned from the existing stories.

To explain it like you’re 5: imagine you have a toy box with a bunch of different blocks in it. Each block is a word, and you can build a sentence by stacking different blocks together. Text generative AI is like a toy robot that can also stack blocks together to make sentences, but it can look at a lot more blocks than you can, and it can stack them together in different ways to make new sentences.
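As a rough sketch of this idea, the toy program below learns which word tends to follow which in a few made-up recipe sentences, then chains those learned pairs together to build a new sentence. The corpus and function names are invented for illustration; real generative models learn far richer patterns than word pairs.

```python
import random
from collections import defaultdict

# Tiny made-up corpus standing in for "lots of recipes".
corpus = [
    "mix the flour with the sugar",
    "mix the sugar with the butter",
    "bake the cake with the butter",
]

# Learn the "blocks": count which word follows each word.
transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        transitions[current].append(nxt)

def generate(start, max_words=8, seed=0):
    """Stack learned word pairs together to form a new sentence."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_words and out[-1] in transitions:
        out.append(rng.choice(transitions[out[-1]]))
    return " ".join(out)

print(generate("mix"))
```

Every word pair in the output was seen somewhere in the training sentences, yet the sentence as a whole can be new, which is the essence of "learning patterns and recombining them".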

Basically, yes. But don’t panic and don’t be afraid. The world is not a perfect place, so machines like Kaila are not perfect either. We’re working hard to improve our models to deliver the best user experience, but keep in mind that:

  • Generative AI models, such as GPT (Generative Pre-trained Transformer) models, have a number of limitations and risks that users should be aware of.
  • One of the main limitations is that generated text can be of lower quality than text written by humans: it may contain grammatical errors, lack coherence, or sometimes make no sense at all.
  • Another limitation is that GPTs can perpetuate biases present in the training data. For example, if a GPT model is trained on a dataset that contains racist or sexist language, it may generate text with similar biases.
  • A risk associated with GPTs is their potential use to create synthetic or false information. Because a GPT can generate text that sounds as if it were written by a human, it could be used for fake news, impersonation, and other forms of deception.
  • Additionally, GPT models can generate malicious text, such as phishing scams, false medical advice, and propaganda.
  • Overall, while GPTs are powerful models that can generate text quickly and easily, they should be used with caution and always under human monitoring and control, keeping the possible risks and limitations in mind to avoid misuse.

Note: GPT in this context doesn’t mean OpenAI’s GPT-3. GPT is an abbreviation for “Generative Pre-trained Transformer”, a family of neural models, alongside BERT, PaLM and others.

In the context of text generative AI models, bias refers to a systematic preference or prejudice that the model may learn and exhibit in the generated text. This bias can be introduced by the dataset that was used to train the model, as the model learns patterns and rules from the data it is trained on. If the training data contains certain biases, the model can learn and reproduce those biases in the generated text.

For example, if a text generative model is trained on a dataset that contains a disproportionate number of articles written by men, it may be more likely to generate text that exhibits a bias towards men. Similarly, if a model is trained on a dataset that contains a disproportionate number of articles written in a specific language, it may be more likely to generate text that exhibits a bias towards that language.

In summary, bias in text generative AI models refers to systematic preferences or prejudices that the model may learn and exhibit in the generated text. This bias is often introduced by the dataset used to train the model. It is important to be aware of these biases and actively work to reduce them in order to prevent the generated text from being harmful or inappropriate.

  • We use state-of-the-art security protocols to ensure that all customer data is transmitted and stored securely on our servers in Frankfurt.

  • Our servers in Frankfurt are continuously monitored and maintained by a team of experienced IT professionals to ensure that your data remains safe and secure at all times.

  • We comply with all relevant data protection regulations, including the General Data Protection Regulation (GDPR), to ensure that your data is handled in a responsible and transparent manner.

  • We have strict access controls in place to ensure that only authorized personnel have access to customer data stored on our servers in Frankfurt.

  • Regular backups are taken on our servers in Frankfurt to ensure that all customer data is recoverable in the event of any unforeseen incident.

  • We conduct regular security audits and penetration testing to identify and remediate any potential vulnerabilities in our systems and networks.

  • Our data center is physically protected with security cameras, biometric access control and 24/7 security personnel.

Your data is stored in a vector database. You might think: “What the hell is a vector database?”, so here is a basic explanation:

In the context of training AI models, a vector database can be used to store and manage large amounts of data that is used to train machine learning algorithms. Vector databases can be used to store the input data, which are often represented as high-dimensional vectors, in which each element corresponds to a feature of the input. These features can be numerical values, categorical values, images or other types of data. The input data is often pre-processed, cleaned, and transformed before being fed into the machine learning model.

The output data, also known as labels or targets, can also be stored in a vector database as vectors of values, indicating the correct outcome or class to which the input corresponds.

In summary, in the context of training AI models, a vector database can be used to store and manage large amounts of input and output data in a structured and organized way, making it easy for data scientists and engineers to access the data they need to train and evaluate the models.
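As a minimal sketch of the core idea, with made-up document ids and vectors: a vector database stores each item as a list of numbers and answers a query by ranking stored vectors by similarity to the query vector. Production vector databases do the same thing with approximate indexes so it works at scale.

```python
import math

# Hypothetical store: each document id maps to its vector representation.
store = {
    "doc-1": [0.9, 0.1, 0.0],
    "doc-2": [0.1, 0.8, 0.1],
    "doc-3": [0.85, 0.15, 0.05],
}

def cosine_similarity(a, b):
    """Similarity of two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest(query, k=2):
    """Return the k stored ids whose vectors are closest to the query."""
    ranked = sorted(
        store,
        key=lambda doc: cosine_similarity(store[doc], query),
        reverse=True,
    )
    return ranked[:k]

print(nearest([1.0, 0.0, 0.0]))  # → ['doc-1', 'doc-3']
```

Here doc-1 and doc-3 point in nearly the same direction as the query, so they are returned first; this "find the closest vectors" operation is the one a vector database is built to make fast.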

We understand that the security of your data is of the utmost importance, and we want to assure you that we take every measure to protect it.

Our infrastructure is designed with security in mind, and we use industry-leading protocols and methods to safeguard your data. This includes encryption of data in transit and at rest, firewalls and network segmentation, and intrusion detection and prevention systems.

We also conduct regular security assessments and penetration testing to identify and remediate any potential vulnerabilities in our systems. In addition, we have implemented multi-factor authentication and access controls to ensure that only authorized personnel have access to customer data.

Our team is well-versed in data protection and security best practices, and we are fully compliant with all relevant data protection regulations, including the General Data Protection Regulation (GDPR), to ensure that your data is handled in a responsible and transparent manner.

In addition, we have stringent incident response and disaster recovery plans in place, so that in the event of any security incident, we can respond quickly and effectively to minimize any potential impact.

Overall, rest assured that your data is in safe hands with us: we make security a top priority and continuously monitor and improve our security measures to ensure the safety of your data.

If your Kaila Agent is generating meaningless answers, it is probably because the subject of the question you or your customers have asked is very far removed from the actual topic on which the Kaila Agent is trained.

It is also possible that the question you or your customers have asked is on topic but your data lacks sufficient context or factual descriptions. In such cases our AI may hallucinate or produce meaningless responses.

How to solve it?

Please log in to your Kaila Studio and try to find the turns that are meaningless from your point of view in the Answers tab. Then try to use them to add the necessary factual descriptions or context in the Kaila Agents tab.

Kaila is currently in non-public beta and is completely free. In the future we plan to release affordable pricing plans for all types of customers, with different scopes of needs.

And what happens when you’re done testing the private beta?
As soon as we release our full version of Kaila, we will let you know well in advance. And don’t worry, we won’t automatically convert you to a paid plan because we will certainly offer a Free plan forever.

We designed the private beta so that you could test it thoroughly and we could get valuable feedback from you. However, there are some limitations so that we can offer all our customers the best possible user experience:

  • The number of Kaila agents is limited to a maximum of 5 active agents.
  • Integrations are limited to only a web-widget (.js snippet on your website).
  • The number of these widgets is limited to 1.
  • The maximum number of colleagues you can invite to Kaila Studio is 3.

We can delete your data as well as all integrations, and we can use your data for fine-tuning our models. However, we do not access the data in text form but as embeddings, which are essentially a numerical representation of the text that is not reversibly decodable.
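Kaila’s actual embedding model is not described here, but the toy sketch below illustrates the general idea: text is mapped to a fixed-size vector of numbers, and the mapping discards information (word order, and distinct words that collide into the same position), so the original text cannot be read back from the vector. The dimension count and hashing scheme are invented for illustration.

```python
import zlib

DIMENSIONS = 8  # arbitrary small size for the illustration

def embed(text):
    """Map text to a fixed-size vector of word-bucket counts."""
    vector = [0] * DIMENSIONS
    for word in text.lower().split():
        # Different words can land in the same bucket, and word
        # order is lost entirely, so embed() is not invertible.
        vector[zlib.crc32(word.encode()) % DIMENSIONS] += 1
    return vector

v = embed("reset your password in account settings")
print(v)  # a list of numbers; the original words are gone
```

Note that `embed("hello world")` and `embed("world hello")` produce the identical vector, which is a small demonstration of why the numbers alone do not let anyone reconstruct the text.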

Currently, our AI models do not offer an inference API for communication with 3rd party services. However, we understand the importance of this feature and it is on our roadmap to be implemented in February 2023. We apologize for any inconvenience this may cause and will be sure to update our customers as soon as the feature is available.

At the moment, our platform does not offer any specific integrations or connectors. However, we understand the importance of being able to integrate with popular tools and platforms, and we have plans to integrate with Slack, Confluence, GitHub, Google Docs, SharePoint, and MS Teams as data sources. We are working on developing these integrations and will make them available as soon as they are ready.