Using Datasets on the Platform

  • I have my own data — how do I upload it to the platform and where does it get stored?
  • What file formats can I use for my datasets, like CSV, Excel, or PDF?
  • If I have lots of datasets, is there a way to organize them so I don’t get lost?
  • When my agent is running, how does it actually pull information from the datasets I uploaded?
  • Can I control who else can see or use my datasets?
  • Are there limits on dataset size that I should know about?

Hello! I’m happy to help with your questions about using datasets on the AgenticFlow platform. Here are the answers based on the information I could find in the documentation:

  • How do I upload data, and where is it stored?
    You can upload your data from the "Data" page by clicking the "Create table" button. From there, you can upload a file, import from a website, or use an integration. All uploaded data is stored centrally in a NoSQL database, so your AI agents and workflows can access it easily.
    
  • What file formats can I use?
    The platform supports CSV, Excel, PDF, and audio files. For PDF and audio files, the text is extracted automatically and stored.
    
  • How does my agent access the data?
    When you build a workflow, you can attach your datasets as "knowledge". In an LLM action within that workflow, you can then use the `{{ knowledge }}` variable in your prompt to pull in the information from your datasets.
    
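As a rough sketch, a prompt inside an LLM action might look like the template below. Only the `{{ knowledge }}` variable comes from the documentation above; the `{{ input }}` placeholder and the overall layout are illustrative assumptions, so check your workflow editor for the actual variable names it exposes.

```
You are a helpful assistant. Answer the user's question using
only the context provided below.

Context (from the attached datasets):
{{ knowledge }}

Question:
{{ input }}
```

When the workflow runs, the platform substitutes the contents of your attached datasets in place of `{{ knowledge }}` before the prompt is sent to the model.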

Unfortunately, I was unable to find specific information in the documentation about organizing datasets, controlling access permissions, or dataset size limits. For details on those topics, I would recommend reaching out to the AgenticFlow support team.

I hope this helps you get started with using your data on the platform!