
Show HN: Documind – Open-source AI tool to turn documents into structured data

Documind is an open-source tool that turns documents into structured data using AI.

What it does:

- Extracts specific data from PDFs based on your custom schema
- Returns clean, structured JSON that's ready to use
- Works with just a PDF link + your schema definition

Just run `npm install documind` to get started.

From the source, Documind appears to:

1) Install tools like Ghostscript, GraphicsMagick, and LibreOffice with a JS script.
2) Convert document pages to Base64 PNGs and send them to OpenAI for data extraction.
3) Use Supabase for unclear reasons.
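Step 2 can be sketched roughly as follows. This is an approximation of the flow described above, not Documind's actual code; the model name, prompt, and helper function are illustrative:

```javascript
// Build an OpenAI Chat Completions request that sends a page image
// (already rendered to PNG and base64-encoded) plus a schema prompt.
// Illustrative sketch only -- not Documind's actual implementation.
function buildVisionRequest(base64Png, schema) {
  return {
    model: 'gpt-4o-mini',                     // placeholder model name
    response_format: { type: 'json_object' }, // ask for JSON back
    messages: [
      {
        role: 'user',
        content: [
          { type: 'text', text: `Extract JSON matching this schema: ${JSON.stringify(schema)}` },
          { type: 'image_url', image_url: { url: `data:image/png;base64,${base64Png}` } },
        ],
      },
    ],
  };
}

// Usage (page rendering via GraphicsMagick/Ghostscript elided):
// const body = buildVisionRequest(pngBase64, { invoice_total: 'number' });
// ...then POST body to https://api.openai.com/v1/chat/completions
```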

Some issues with this approach:

* OpenAI may retain and use your data for training, raising privacy concerns [1].

* Dependencies should be managed with Docker or package managers like Nix or Pixi, which are more robust. Example: a tool like Parsr [2] provides a Dockerized pdf-to-json solution, complete with OCR support and an HTTP API.

* GPT-4 vision seems like a costly, error-prone solution, not really suited to extracting data from sensitive docs like invoices without review.

* Traditional methods (PDF parsers with OCR support) are cheaper, more reliable, and avoid retention risks for this particular use case. Although these tools do require some plumbing... probably LLMs can really help with that!

While there are plenty of tools for structured data extraction, I think there’s still room for a streamlined, all-in-one solution. This gap likely explains the abundance of closed-source commercial options tackling this very challenge.

---

1: https://platform.openai.com/docs/models#how-we-use-your-data

2: https://github.com/axa-group/Parsr

7 days ago · emmanueloga_

Disappointed to see this is an exact rip of our open source tool zerox [1]. With no attribution. They also took the MIT License and changed it out for an AGPL.

If you inspect the source code, it's a verbatim copy. They literally just renamed the ZeroxOutput to DocumindOutput [2][3]

[1] https://github.com/getomni-ai/zerox

[2] https://github.com/DocumindHQ/documind/blob/main/core/src/ty...

[3] https://github.com/getomni-ai/zerox/blob/main/node-zerox/src...

7 days ago · themanmaran

Are there any reputation mechanisms or github flagging systems to alert users to such scams?

It’s a pretty unethical behavior if what you describe is the full story and as a user of many open source projects how can one be aware of this type of behavior?

7 days ago · alchemist1e9

Hello. I apologize that it came across this way. This was not the intention. Zerox was definitely used and I made sure to copy and include the MIT license exactly as it was inside the part of the code that uses Zerox.

If there's anything else I can do, please let me know and I will make all amendments immediately.

7 days ago · Tammilore

You took their code, did a search and replace on the product name, and you've relicensed the code as AGPL?

You're going to have to delete this thing and start over man.

4 days ago · gmerc

It appears that the MIT license was correctly included to apply to the zerox code used while the AGPL license applies to their own code. Isn’t this how it should be?

4 days ago · leojaygod

For the MIT license to make sense it needs a copyright notice; I don't actually see one in the original license. It just says "The MIT license" but then the text below references the above copyright notice, which doesn't exist.

I think both sides here can learn from this, copyright notices are technically not required but when some text references them it is very useful. The original author should have added one. The user of the code could also have asked about the copyright. If this were to go to court having the original license not making sense could create more questions than it should.

tl;dr: add a copyright line at the top of the file when you’re using the MIT license.

5 days ago · dontdoxxme

If you are looking for the latest/greatest in file processing I'd recommend checking out vision language models. They generate embeddings of the images themselves (as a collection of patches) and you can see query matching displayed as a heatmap over the document. Picks up text that OCR misses. My company DataFog has an open-source demo if you want to try it out: https://github.com/DataFog/vlm-api

If you're looking for an all-in-one solution, little plug for our new platform that does the above and also allows you to create custom 'patterns' that get picked up via semantic search. Uses open-source models by default, can deploy into your internal network. www.datafog.ai. In beta now and onboarding manually. Shoot me an email if you'd like to learn more!

5 days ago · sidmo

That's not what [1] says, though? Quoth: "As of March 1, 2023, data sent to the OpenAI API will not be used to train or improve OpenAI models (unless you explicitly opt-in to share data with us, such as by providing feedback in the Playground). "

"Traditional methods (PDF parsers with OCR support) are cheaper, more reliable"

Not sure on the reliability - the ones I'm using all fail at structured data. You want a table extracted from a PDF, LLMs are your friend. (Recommendations welcome)

7 days ago · groby_b

We found that for extracting tables, OpenAIs LLMs aren't great. What is working well for us is Docling (https://github.com/DS4SD/docling/)

7 days ago · niklasd

Haven't seen Docling before, it looks great! Thanks for sharing.

6 days ago · emmanueloga_

Agreed, extracting tables in PDFs using any of the available OpenAI models has been a waste of prompting time here too.

7 days ago · soci

> That's not what [1] says, though?

Documind is using https://api.openai.com/v1/chat/completions, check the docs at the end of the long API table [1]:

> * Chat Completions:

> Image inputs via the gpt-4o, gpt-4o-mini, chatgpt-4o-latest, or gpt-4-turbo models (or previously gpt-4-vision-preview) are not eligible for zero retention.

--

1: https://platform.openai.com/docs/models#how-we-use-your-data

7 days ago · emmanueloga_

Thanks for pointing there!

It's still not used for training, though, and the retention period is 30 days. It's... a livable compromise for some(many) use cases.

I kind of get the abuse policy reason for image inputs. It makes sense for multi-turn conversations to require a 1h audio retention, too. I'm just incredibly puzzled why schemas for structured outputs aren't eligible for zero-retention.

6 days ago · groby_b

It takes >50 seconds to generate these schemas for some pretty simple use-cases with large enums, for example. Imagine that latency added to each request...

a day ago · pconstantine

Gotcha, from what I could find online I think you are right. I was conflating data not under zero-retention-policy with data-for-training.

5 days ago · emmanueloga_
[deleted]
7 days ago

OpenAI isn't retaining data sent via the API for training. Stop.

7 days ago · brianjking

OP, you've been accused of literally ripping off somebody's more popular repository and passing it off as your own.

https://news.ycombinator.com/item?id=42178413

You may wanna get ahead of this because the evidence is fairly damning. Failing to even give credit to the original project is a pretty gross move.

7 days ago · vunderba

Hi. This was definitely not the intention.

I made sure to copy and paste the MIT license in Zerox exactly as it was into the folder of the code that uses it. I also included it in the main license file as well. If there's anything I can do to make corrections, please let me know and I'll change it ASAP.

7 days ago · Tammilore

Your initial commit makes it look like you wrote all the code. https://github.com/DocumindHQ/documind/commit/d91121739df038... This is because you copied and uploaded the code instead of forking. You could do a lot by restoring attribution. Your history would look the same as https://github.com/getomni-ai/zerox/commits/main/ and diverge from where you forked.

People are getting upset because this is not a nice thing to do. Attribution is significant. No one would care if you replaced all the names with the new ones in a fork because they would see commits that do that.

4 days ago · ankenyr

Hi. Thank you for pointing this out. I totally understand now that forking would have kept the commit history visible and made the attribution clearer. I have since added a direct note in the repo acknowledging that it is built on the original Zerox project and also linked back to it. If there’s anything else you’d suggest, happy to hear it. Thanks again.

3 days ago · Tammilore

It would be better to attribute. You can still do this by fixing the git commit history and doing a force push. It would do a lot to make people feel better.

3 days ago · ankenyr

Multimodal LLM are not the way to do this for a business workflow yet.

In my experience you're much better off starting with Azure Doc Intelligence or AWS Textract to first get the structure of the document (PDF). These tools are incredibly robust and do a great job with most of the common cases you can throw at them. From there you can use an LLM to interrogate and structure the data to your heart's delight.
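A sketch of that two-step flow, assuming the AWS SDK v3 Textract client (the helper below is mine, the actual call needs AWS credentials, and the LLM structuring step is left as a comment):

```javascript
// Build parameters for Textract's AnalyzeDocument call. Table and form
// extraction must be requested explicitly (and is billed separately).
function buildAnalyzeParams(pdfBytes) {
  return {
    Document: { Bytes: pdfBytes },      // raw document bytes
    FeatureTypes: ['TABLES', 'FORMS'],  // opt in to table/form structure
  };
}

// Usage (requires @aws-sdk/client-textract and credentials):
// const { TextractClient, AnalyzeDocumentCommand } = require('@aws-sdk/client-textract');
// const client = new TextractClient({ region: 'us-east-1' });
// const out = await client.send(new AnalyzeDocumentCommand(buildAnalyzeParams(bytes)));
// // out.Blocks (the document structure) then goes into an LLM prompt
// // that reshapes it into whatever schema you need.
```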

7 days ago · infecto

> AWS Textract to first get the structure of the document (PDF). These tools are incredibly robust and do a great job with most of the common cases you can throw at it.

Do they work for Bills of Lading yet? When I tested a sample of these bills a few years back (2022 I think), the results were not good at all. But I honestly wouldn't be surprised if they'd massively improved lately.

7 days ago · disgruntledphd2

Haven't used it on your docs, but I can say it definitely works well with forms, and with forms containing tables like a Bill of Lading. It costs extra, but you need to turn on table extraction (at least in AWS). You can then get a markdown representation of the page including its tables; you can of course pull out the table itself, but unless it's standardized you will need the middleman LLM to figure out the exact data/structure you are looking for.

7 days ago · infecto

Huh, interesting. I'll have to try again next time I need to parse stuff like this.

5 days ago · disgruntledphd2

Plus one, using the exact setup to make it scale. If Azure Doc Intelligence gets too expensive, VLMs also work great

7 days ago · IndieCoder

From just reading the README, the example is not valid JSON. Is that intentional?

Otherwise it seems like a prompt building tool, or am I missing something here?

7 days ago · bob778

Thanks for pointing this out. This was an error on my part.

I see someone opened an issue for it so will fix now.

7 days ago · Tammilore

Oof you’re right LOL

7 days ago · assanineass

With such a system, how do you ensure that the extracted data matches the data in the source document? Run the process several times and check that the results are identical? Can it reject inputs for manual processing? Or is it intended to be always checked manually? How good is it, how many errors does it make, say per million extracted values?

7 days ago · danbruc

Perhaps there's still value in the documents being transformed by this tool and someone reviewing them manually, but obviously the real value would be in reducing manual review. I don't think there's a world, for now, in which this manual review can be completely eliminated.

However, if you process, say, 1 million documents, you could sample and review a small percentage of them manually (a power calculation would help here). Assuming your random sample models the "distribution" (which may be tough to define/summarize) of the 1 million documents, you could then extrapolate your accuracy onto the larger set of documents without having to review each and every one.

7 days ago · glorpsicle

You can sample the result to determine the error rate, but if you find an unacceptable level of errors, then you still have to review everything manually. On the other hand, if you use traditional techniques, pattern matching with regular expressions and things like that, then you can probably get pretty close to perfection for those cases where your patterns match and you can just reject the rest for manual processing. Maybe you could ask a language model to compare the source document and the extracted data and to indicate whether there are errors, but I am not sure if that would help, maybe what tripped up the extraction would also trip up the result evaluation.
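A minimal sketch of that pattern-matching-with-rejection idea (the field patterns are illustrative, not from any particular tool):

```javascript
// Extract fields with regexes; if any pattern fails to match, reject the
// document for manual processing instead of guessing.
function extractInvoiceFields(text) {
  const patterns = {
    invoiceNumber: /Invoice\s*#?\s*:?\s*([A-Z0-9-]+)/i,
    total: /Total\s*:?\s*\$?([\d,]+\.\d{2})/,
  };
  const fields = {};
  for (const [name, re] of Object.entries(patterns)) {
    const m = text.match(re);
    if (!m) return { ok: false, reason: `no match for ${name}` }; // manual queue
    fields[name] = m[1];
  }
  return { ok: true, fields };
}

console.log(extractInvoiceFields('Invoice #: INV-42\nTotal: $1,234.50'));
// -> { ok: true, fields: { invoiceNumber: 'INV-42', total: '1,234.50' } }
```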

7 days ago · danbruc

Just this weekend I was solving a similar problem.

What I've noticed is that on scanned documents, where stamped text and handwriting are just as important as printed text, Gemini was way better compared to ChatGPT.

Of course, my prompts might have been an issue, but Gemini produced significantly better results with very brief and generic queries.

7 days ago · rkuodys

Legit question: By _removing the MIT license_ from the distribution and replacing it with the AGPL, how are you not violating the copyright and subject to a lawsuit?

The MIT license has just 2 conditions. They are pretty easy to read, and the first one is:

> The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

By replacing the license, you violate this very simple agreement.

4 days ago · slippy

Hi. Thanks for the question. To clarify, the MIT license was never removed or swapped. The license was and still is included in the folder that contains the code from the original project. In the root of the repository, I added the AGPL license for the new code I developed and made sure to explicitly acknowledge that the code in the folder is still under the MIT license.

I’ve also added a direct note acknowledging and linking back to the zerox project.

3 days ago · Tammilore

Got excited about an open-source tool doing this.

Alas, I am let down. It is an open-source tool that builds the prompt for the OpenAI API, and I can't go and send customer data to them.

I'm aware of https://github.com/clovaai/donut so i hoped this would be more like that.

7 days ago · inexcf

You can self-host OpenAI-compatible models with LM Studio and the like. I've used it with https://anythingllm.com/
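For instance, the official `openai` npm package can be pointed at a local OpenAI-compatible endpoint (LM Studio serves one on port 1234 by default; the helper and model name below are assumptions about a local setup):

```javascript
// Client configuration for a locally hosted OpenAI-compatible server,
// so no document data leaves your machine.
function localClientConfig(baseURL = 'http://localhost:1234/v1') {
  return {
    baseURL,              // local OpenAI-compatible endpoint
    apiKey: 'not-needed', // local servers typically ignore the key
  };
}

// Usage (assumes the `openai` npm package and a running local server):
// const OpenAI = require('openai');
// const client = new OpenAI(localClientConfig());
// const res = await client.chat.completions.create({
//   model: 'local-model', // whatever model the local server exposes
//   messages: [{ role: 'user', content: 'Hello' }],
// });
```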

7 days ago · _joel

Hi. I totally get the concern about sending data to OpenAI. Right now, Documind uses OpenAI's API just so people could quickly get started and see what it is like, but I’m open to adding options and contributions that would be better for privacy.

7 days ago · Tammilore

That sounds great.

6 days ago · inexcf

I'd recommend checking out vision language models. They generate embeddings of the images themselves (as a collection of patches) and you can see query matching displayed as a heatmap over the document. Picks up text that OCR misses. I built a simple API over it if you want to try it out: https://github.com/DataFog/vlm-api

5 days ago · sidmo
[deleted]
7 days ago

Not sure I would want something non-deterministic in my data pipeline. Maybe if it used GenAI to _develop a ruleset_ that could then be deployed, it would be more practical.

7 days ago · khaki54

Reading from the comments, some of the common questions regarding document extraction are:

* Run locally or on premise for security/privacy reasons

* Support multiple LLMs and vector DBs - plug and play

* Support customisable schemas

* Method to check/confirm accuracy with source

* Cron jobs for automation

There is Unstract that solves the above requirements.

https://github.com/Zipstack/unstract

7 days ago · constantinum

Very nice tool! Just last week, I was working on extracting information from PDFs for an automation flow I’m building. I used Unstructured (https://unstructured.io/), which supports multiple file types, not just PDFs.

However, my main issue is that I need to work with confidential client data that cannot be uploaded to a third party. Setting up the open-source, locally hosted version of Unstructured was quite cumbersome due to the numerous additional packages and installation steps required.

While I’m open to the idea of parsing content with an LLM that has vision capabilities, data safety and confidentiality are critical for many applications. I think your project would go from good to great if it were possible to connect to Ollama and run locally.

That said, this is an excellent application! I can definitely see myself using it in other projects that don’t demand such stringent data confidentiality.

7 days ago · thor-rodrigues

Thank you, I appreciate the feedback! I understand people wanting data confidentiality and I'm considering connecting Ollama for future updates!

7 days ago · Tammilore

Looking at the source it seems this is just a thin wrapper over OpenAI. Am I missing something?

7 days ago · azinman2

I’ll have to test this against my local Python pipeline which does all this without an LLM in attendance. There are a ton of existing Python libraries which have been doing this for a long time, so let’s take a look..

7 days ago · vr46

Care to share the best ones for some use cases? Thanks

7 days ago · thegabriele

MinerU

PDFQuery

PyMuPDF (having more success with older versions, right now)

7 days ago · vr46

I'm not sure having a statistical model prone to fabrication try to extract text from PDFs would result in any mission-critical, reliable data?

7 days ago · gibsonf1

Documind: Open-Source AI for Document Data Extraction

If you're dealing with unstructured data trapped in PDFs, Documind might be the tool you’ve been waiting for. It’s an open-source solution that simplifies the process of turning documents into clean, structured JSON data with the power of AI.

Key features:

1. Customizable data extraction: define your own schema to extract exactly the information you need from PDFs, with no unnecessary clutter.

2. Simple input, clean output: just provide a PDF link and your schema definition, and it returns structured JSON data, ready to integrate into your workflows.

3. Developer-friendly: with a simple setup (`npm install documind`), you can get started right away and start automating tedious document processing tasks.

Whether you’re automating invoice processing, handling contracts, or working with any document-heavy workflows, Documind offers a lightweight, accessible solution. And since it’s open-source, you can customize it further to suit your specific needs.

Would love to hear if others in the community have tried it—how does it stack up for your use cases?

7 days ago · fredtalty5

I am looking for a similar service that turns any document (PNG, PDF, DOCX) into JSON (preserving the field relationships). I tried with ChatGPT, but hallucinations are common. Does anything exist?

7 days ago · asjfkdlf

I built a drag-and-drop document converter that extracts text into custom columns (for CSV) or keys (for JSON). You can schedule it to run at certain times and update a database as well.

I haven't had issues with hallucinations. If you're interested, my email is in my bio.

7 days ago · cccybernetic

This is also using OpenAI's GPT model. So the same hallucinations are probable here for PDFs.

7 days ago · omk

That's a valid problem you are solving. I had a similar use case that I solved using PDF[dot]co

7 days ago · hirezeeshan
[deleted]
6 days ago

  const systemPrompt = `
    Convert the following PDF page to markdown.
    Return only the markdown with no explanation text. Do not include deliminators like '''markdown.
    You must include all information on the page. Do not exclude headers, footers, or subtext.
  `;
7 days ago · eichi
[deleted]
7 days ago

> an interesting open source project

enthusiastically setting up a lounge chair

> OPENAI_API_KEY=your_openai_api_key

carrying it back apathetically

7 days ago · avereveard

Thanks for the laugh and your feedback! I know that depending on OpenAI isn't ideal for everyone. I'm considering ways to make it more self-contained in the future, so it’s great to hear what users are looking for.

7 days ago · Tammilore

litellm would be a start: you just pass in a model string that includes the provider and can default to OpenAI GPTs. That removes most of the effort of adapting things, both for you and for other users.

7 days ago · avereveard
[deleted]
7 days ago

[dead]

7 days ago · ajith-joseph

[dead]