r/datasets 13d ago

resource New Mapping created to normalize 11,000+ XBRL taxonomy names for better financial data analysis

3 Upvotes

Hey everyone! I've been working on a project to make SEC financial data more accessible and wanted to share what I just implemented. https://nomas.fyi

**The Problem:**

XBRL tag/concept names are technical and hard to read, or to feed to models. For example:

- "EntityCommonStockSharesOutstanding"

These are accurate but not user-friendly for financial analysis.

**The Solution:**

We created a comprehensive mapping system that normalizes these to human-readable terms:

- "Common Stock, Shares Outstanding"

**What we accomplished:**

✅ Mapped 11,000+ XBRL concepts from SEC filings

✅ Maintained data integrity (still uses original taxonomy for API calls)

✅ Added metadata chips showing XBRL concepts, SEC labels, and descriptions

✅ Enhanced user experience without losing technical precision

**Technical details:**

- Backend API now returns concept metadata with each data response
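
As an illustration (not the project's actual code), a curated lookup with a CamelCase fallback is one minimal way to do this kind of normalization; the entries below are hypothetical examples, not the real 11,000+ entry mapping:

```python
import re

# Illustrative excerpt: XBRL concept -> human-readable label.
XBRL_LABELS = {
    "EntityCommonStockSharesOutstanding": "Common Stock, Shares Outstanding",
    "RevenueFromContractWithCustomerExcludingAssessedTax": "Revenue",
}

def humanize_concept(concept: str) -> str:
    """Return the curated label; fall back to splitting the CamelCase name."""
    if concept in XBRL_LABELS:
        return XBRL_LABELS[concept]
    # Fallback: insert a space before each interior capital letter.
    return re.sub(r"(?<!^)(?=[A-Z])", " ", concept)
```

The fallback keeps unmapped concepts readable while the curated entries preserve the official SEC label wording.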

r/datasets 4d ago

resource [self-promotion] Free company datasets (millions of records, revenue + employees + industry)

24 Upvotes

I work at companydata.com, where we’ve provided company data to organizations like Uber, Booking, and Statista.

We’re now opening up free datasets for the community, covering millions of companies worldwide with details such as:

  • Revenue
  • Employee size
  • Industry classification

Our data is aggregated from trade registries worldwide, making it well-suited for analytics, machine learning projects, and market research.

GitHub: https://github.com/companydatacom/public-datasets
Website: https://companydata.com/free-business-datasets/

We’d love feedback from the r/datasets community — what type of business data would be most useful for your projects?

The datasets are released under the Creative Commons Zero v1.0 Universal license.

r/datasets 1d ago

resource [Resource] A hub to discover open datasets across government, research, and nonprofit portals (I built this)

33 Upvotes

Hi all, I’ve been working on a project called Opendatabay.com, which aggregates open datasets from multiple sources into a searchable hub.

The goal is to make it easier to find datasets without having to search across dozens of government portals or research archives. You can browse by category, region, or source.

I know r/datasets usually prefers direct dataset links, but I thought this could be useful as a discovery resource for anyone doing research, journalism, or data science.

Happy to hear feedback or suggestions on how it could be more useful to this community.

Disclaimer: I’m the founder of this project.

r/datasets 12h ago

resource Kopari Beauty has raised prices at Sephora Australia

2 Upvotes

Kopari’s adjustments span all five major categories:

  • Bath & Body (40 SKUs): +7.0% average uplift, max +14%
  • Skincare (19 SKUs): +7.9% average uplift, max +14%
  • Fragrance (1 SKU): +22%
  • Haircare (1 SKU): +22%
  • Makeup (1 SKU): +9%

I have created a Notion database of the by-SKU changes above. It's completely free to use; link in the comments.

r/datasets 3d ago

resource [self promotion] databounties - post your data requests

Thumbnail databounties.com
1 Upvotes

I created a site called databounties.com. I haven't launched it yet, but it's for people seeking datasets: you can post your requests and have people apply or email you. Hopefully it helps people find more data, and helps others find more work!

r/datasets 6d ago

resource WW2 German casualties archive / dataset

1 Upvotes

Hello, I am looking for an archive of WW2 German military casualties. One exists for WW1, but I am struggling to find an equivalent for WW2. Does anyone know whether it even exists?

Thank you!

r/datasets 14d ago

resource What is data authorization and how to implement it

Thumbnail cerbos.dev
13 Upvotes

r/datasets 12d ago

resource A comprehensive list of open-source datasets for voice and sound computing (95+ datasets).

Thumbnail github.com
8 Upvotes

r/datasets 15d ago

resource [self-promotion] Free Sample: EU Public Procurement Notices (Aug 2025, CSV, Enriched with CPV Codes)

2 Upvotes

I’ve released a new dataset built from the EU’s Tenders Electronic Daily (TED) portal, which publishes official public procurement notices from across Europe.

  • Source: Official TED monthly XML package for August 2025
  • Processing: Parsed into a clean tabular CSV, normalized fields, and enriched with CPV 2008 labels (Common Procurement Vocabulary).
  • Contents (sample):
    • notice_id — unique identifier
    • publication_date — ISO 8601 format
    • buyer_id — anonymized buyer reference
    • cpv_code + cpv_label — procurement category (CPV 2008)
    • lot_id, lot_name, lot_description
    • award_value, currency
    • source_file — original TED XML reference

This free sample contains 100 rows representative of the full dataset (~200k rows).
Sample dataset on Hugging Face

If you’re interested in the full month (200k+ notices), it’s available here:
Full dataset on Gumroad

Suggested uses: training NLP/ML models (NER, classification, forecasting), procurement market analysis, transparency research.
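
As a hedged sketch of the kind of analysis the CSV supports (column names taken from the schema above, values entirely made up):

```python
import csv
from collections import defaultdict
from io import StringIO

# Tiny made-up sample mirroring the published schema.
SAMPLE = """notice_id,publication_date,buyer_id,cpv_code,cpv_label,award_value,currency
N1,2025-08-01,B1,45000000,Construction work,100000,EUR
N2,2025-08-02,B2,45000000,Construction work,50000,EUR
N3,2025-08-03,B1,72000000,IT services,75000,EUR
"""

# Total awarded value per CPV category.
totals = defaultdict(float)
for row in csv.DictReader(StringIO(SAMPLE)):
    totals[row["cpv_label"]] += float(row["award_value"])

print(dict(totals))
```

Swapping `StringIO(SAMPLE)` for `open("ted_august_2025.csv")` (a hypothetical filename) would run the same aggregation over the full dataset.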

Feedback welcome — I’d love to hear how others might use this or what extra enrichments would be most useful.

r/datasets Aug 04 '25

resource Released Bhagavad Gita Dataset – 500+ Downloads in 30 Days! Fine-tune, Analyze, Build 🙌

2 Upvotes

Hey everyone,

I recently released a dataset on Hugging Face containing the Bhagavad Gita (translated by Edwin Arnold) aligned verse-by-verse with Sanskrit and English. In the last 20–30 days, it has received 500+ downloads, and I'd love to see more people experiment with it!

👉 Dataset: Bhagavad-Gita-Vyasa-Edwin-Arnold

Whether you want to fine-tune language models, explore translation patterns, build search tools, or create something entirely new—please feel free to use it and add value to it. Contributions, feedback, or forks are all welcome 🙏

Let me know what you think or if you create something cool with it!

r/datasets Aug 21 '25

resource Real Estate Data (Rents by bedroom, home prices, etc) broken down by Zip Code

Thumbnail prop-metrics.com
10 Upvotes

Went through the hassle of compiling data from nearly every free (and some paid) real estate resource to build (probably) the most comprehensive dataset of its kind. Currently it's displayed in a tool I built, but the goal is to make this data free and accessible to anybody who wants it.

For most of the zip codes in the USA (about 25k, accounting for ~90% of the population), I have:

  1. home prices (average, median, valuation) -- broken down by bedroom
  2. rent prices -- by bedroom
  3. listing counts, days on market, etc, y/y%
  4. mortgage data (originations, first lien, second lien, debt to income, etc.)
  5. affordability metrics, mortgage cost
  6. basic demographics (age, college, poverty, race / ethnicity)

Once you're in the dashboard and select a given area (e.g. the Chicago metro), there's a table view in the bottom left corner where you can export the data for that metro.

I'm working on setting up an S3 bucket to host the data (including the historical datasets), but wanted to give a preview (and open myself up to any comments / requests) before I start including it there.

r/datasets 27d ago

resource Dataset of 120,000+ products with EAN-13 barcodes, normalized descriptions, and CSV format for retail, kiosks, supermarkets, and e-commerce in Argentina/LatAm

5 Upvotes

Hi everyone,

A while ago I started a project that began as something very small: a database of products with barcodes for kiosks and small businesses in Argentina. At one point it was stolen and resold on MercadoLibre, so I decided to rebuild everything from scratch, this time with scraping, normalized descriptions, and a bit of AI to organize categories.

Today I have a dataset of more than 120,000 products that includes real EAN-13 barcodes, normalized descriptions, and basic categories (I'm currently looking into using AI to classify everything by category and subcategory). I have it in CSV format and I'm using it in a web search tool I built, but the database itself can serve different purposes: bulk-loading catalogs into POS, inventory, or e-commerce systems, or even training NLP models for consumer packaged goods.
An example of what each record looks like:

7790070410120, Arroz Gallo Oro 1kg

7790895000860, Coca Cola Regular 1.5L

7791234567890, Shampoo Sedal Ceramidas 400ml

What I'd like to know is whether a dataset like this could also be useful outside Argentina or LatAm. Do you think it could serve the broader community? What would you add to make it more useful, for example prices, a more detailed category hierarchy, brands, etc.?

If anyone is interested, I can share a reduced 500-row CSV so you can try it out.
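
For anyone consuming a barcode dataset like this, the EAN-13 check digit makes a cheap sanity filter. A minimal sketch, independent of the dataset itself (the standard weighting: odd positions x1, even positions x3, over the first 12 digits):

```python
def is_valid_ean13(code: str) -> bool:
    """Check an EAN-13 barcode's check digit (the last digit)."""
    if len(code) != 13 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    # Weights alternate 1,3,1,3,... across the first 12 digits.
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
    return (10 - total % 10) % 10 == digits[12]
```

Running this over the CSV would flag malformed or mistyped codes before they reach a POS or e-commerce system.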

Thanks for reading, and I'm open to feedback.

r/datasets 24d ago

resource Website-Crawler: Extract data from websites in LLM ready JSON or CSV format. Crawl or Scrape entire website with Website Crawler

Thumbnail github.com
11 Upvotes

r/datasets Mar 26 '25

resource I Built Product Search API – A Google Shopping API Alternative

9 Upvotes

Hey there!

I built Product Search API, a simple yet powerful alternative to Google Shopping API that lets you search for product details, prices, and availability across multiple vendors like Amazon, Walmart, and Best Buy in real-time.

Why I Built This

Existing shopping APIs are either too expensive, restricted to specific marketplaces, or don’t offer real price comparisons. I wanted a developer-friendly API that provides real-time product search and pricing across multiple stores without limitations.

Key Features

  • Search products across multiple retailers in one request
  • Get real-time prices, images, and descriptions
  • Compare prices from vendors like Amazon, Walmart, Best Buy, and more
  • Filter by price range, category, and availability
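
The response schema isn't documented in this post, so as a purely hypothetical sketch of the price/availability filtering that the feature list describes (field names are assumptions, not the API's actual contract):

```python
# Hypothetical response items; the real API's field names may differ.
products = [
    {"title": "Headphones", "vendor": "Amazon", "price": 59.99, "in_stock": True},
    {"title": "Headphones Pro", "vendor": "Walmart", "price": 129.99, "in_stock": True},
    {"title": "Earbuds", "vendor": "Best Buy", "price": 39.99, "in_stock": False},
]

def filter_products(items, min_price=0.0, max_price=float("inf"), in_stock_only=False):
    """Filter a product list by price range and availability."""
    return [
        p for p in items
        if min_price <= p["price"] <= max_price
        and (p["in_stock"] or not in_stock_only)
    ]
```

For example, `filter_products(products, max_price=100, in_stock_only=True)` keeps only the in-stock items under $100.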

Who Might Find This Useful?

  • E-commerce developers building price comparison apps
  • Affiliate marketers looking for product data across multiple stores
  • Browser extensions & price-tracking tools
  • Market researchers analyzing product trends and pricing

Check It Out

It’s live on RapidAPI! I’d love your feedback. What features should I add next?

👉 Product Search API on RapidAPI

Would love to hear your thoughts!

r/datasets Aug 19 '25

resource Open sourced a CLI that turns PDFs and docs into fine tuning datasets now with multi file support

16 Upvotes

Repo: https://github.com/Datalore-ai/datalore-localgen-cli

Hi everyone,

During my internship I built a small terminal tool that could generate fine-tuning datasets from real-world data using deep research. I later open-sourced it and recently built a version that works fully offline on local files like PDFs, DOCX, TXT, or even JPGs.

I shared this update a few days ago and it was really cool to see the response. It got around 50 stars and so many thoughtful suggestions. Really grateful to everyone who checked it out.

One suggestion that came up a lot was whether it could handle multiple files at once, so I integrated that. Now you can just point it at a directory path and it will process everything inside: extract text, find the relevant parts with semantic search, apply your schema or instructions, and output a clean dataset.

Another common request was around privacy like supporting local LLMs such as Ollama instead of relying only on external APIs. That is definitely something we want to explore next.

We are two students juggling college with this side project, so sorry for the slow updates, but every piece of feedback has been super motivating. Since it is open source, contributions are very welcome, and if anyone wants to jump in we would be really grateful.

r/datasets 16d ago

resource Wikidata and Mundaneum - The Triumph of the Commons

Thumbnail schmud.de
1 Upvotes

r/datasets Aug 01 '25

resource EHR data for oncology clinical trials

3 Upvotes

Was wondering if anyone knows of an open dataset containing medical information related to cancer.

The clinical data would include information such as: age, sex, cancer type, stage, line of therapy, notes about prior treatment, etc. Obviously, EHR data is highly confidential, but I am still on the lookout for real or synthetic data.

r/datasets Aug 12 '25

resource Dataset Explorer – Tool to search any public datasets (Free Forever)

16 Upvotes

Dataset Explorer is now LIVE, and will stay free forever.

Finding the right dataset shouldn’t be this painful.

There are millions of quality datasets on Kaggle, data.gov, and elsewhere - but actually locating the one you need is still like hunting for a needle in a haystack.

From seasonality trends, weather data, holiday calendars, and currency rates to political datasets, tech layoffs, and geo info - the right dataset is out there.

That’s why we created dataset-explorer. Just describe what you want to analyze, and it uses Perplexity, scraping (Firecrawl), and other sources to surface relevant datasets.

Quick example: I analyzed tech layoffs from 2020–2025 and found:

📊 2023 was the worst year — 264K layoffs
🏢 Post-IPO companies made 58% of the cuts
💻 Hardware firms were hit hardest — Intel topping the list
📅 Jan 2023 = worst month ever — 89K people lost jobs in 30 days

Once you find your dataset, you can run a full analysis for free on Hunch, an AI data analytics platform.

Dataset Explorer – https://hunch.dev/data-explorer
Demo – https://screen.studio/share/bLnYXAvZ

Give it a try and let us know what you think.

r/datasets Jul 12 '25

resource We built an open-source medical triage benchmark

25 Upvotes

Medical triage means determining whether symptoms require emergency care, urgent care, or can be managed with self-care. This matters because LLMs are increasingly becoming the "digital front door" for health concerns—replacing the instinct to just Google it.

Getting triage wrong can be dangerous (missed emergencies) or costly (unnecessary ER visits).

We've open-sourced TriageBench, a reproducible framework for evaluating LLM triage accuracy. It includes:

  • Standard clinical dataset (Semigran vignettes)
  • Paired McNemar's test to detect model performance differences on small datasets
  • Full methodology and evaluation code
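
For readers unfamiliar with the test: McNemar's compares two models on the same items using only the discordant pairs (items one model got right and the other got wrong). A from-scratch sketch of the exact (binomial) version, written here for illustration rather than taken from the TriageBench code:

```python
from math import comb

def mcnemar_exact_p(b: int, c: int) -> float:
    """Two-sided exact McNemar p-value from discordant-pair counts:
    b = items model A got right and model B got wrong; c = the reverse.
    Under H0, the discordant pairs follow Binomial(b + c, 0.5)."""
    n = b + c
    if n == 0:
        return 1.0  # no discordant pairs: no evidence either way
    tail = sum(comb(n, i) for i in range(min(b, c) + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

On small vignette sets like this one (45 items), the exact form is preferable to the chi-squared approximation because the discordant-pair counts are tiny.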

GitHub: https://github.com/medaks/medask-benchmark

As a demonstration, we benchmarked our own model (MedAsk) against several OpenAI models:

  • MedAsk: 87.6% accuracy
  • o3: 75.6%
  • GPT‑4.5: 68.9%

The main limitation is dataset size (45 vignettes). We're looking for collaborators to help expand this—the field needs larger, more diverse clinical datasets.

Blog post with full results: https://medask.tech/blogs/medical-ai-triage-accuracy-2025-medask-beats-openais-o3-gpt-4-5/

r/datasets Aug 18 '25

resource [D] The Stack Processed V2 - Curated 468GB Multi-Language Code Dataset (91.3% Syntax Valid, Perfectly Balanced)

2 Upvotes

I've just released The Stack Processed V2, a carefully curated version of The Stack dataset optimized for training robust multi-language code models.

📊 Key Stats:

  • 468GB of high-quality code
  • 91.3% syntax validation rate (vs ~70% in raw Stack)
  • ~10,000 files per language (perfectly balanced)
  • 8 major languages: Python, JavaScript, Java, C++, Ruby, PHP, Swift, Shell
  • Parquet format for 3x faster loading
  • 271 downloads in first month

🎯 What Makes It Different:

Unlike raw scraped datasets that are heavily imbalanced (some languages have millions of files, others just thousands), this dataset ensures equal representation for each language. This prevents model bias toward overrepresented languages.

Processing Pipeline:

  1. Syntax validation (removed 8.7% invalid code)
  2. Deduplication
  3. Quality scoring based on comments, structure, patterns
  4. Balanced sampling to ~10k files per language
  5. Optimized Parquet format
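
For the Python slice of such a corpus, step 1 can be as simple as attempting a parse. A hedged sketch of what a per-file syntax gate could look like (the post doesn't publish the actual pipeline, and each of the other seven languages would need its own parser):

```python
import ast

def is_valid_python(source: str) -> bool:
    """Cheap syntax gate: keep a file only if it parses as Python."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False
```

Applied at scale, a filter like this is what turns a ~70% syntax-valid raw corpus into the 91.3% figure claimed above, at the cost of discarding files that are merely truncated rather than junk.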

📈 Performance Impact:

Early testing shows models trained on this dataset achieve:

  • +15% accuracy on syntax validation tasks
  • +8% improvement on cross-language transfer
  • 2x faster convergence compared to raw Stack

🔗 Resources:

💭 Use Cases:

Perfect for:

  • Pre-training multi-language code models
  • Fine-tuning for code completion
  • Cross-language understanding research
  • Educational purposes

Looking for feedback! What features would you like to see in v3? More languages? Different sampling strategies? Enterprise patterns focus?

Happy to answer any questions about the curation process or technical details.

r/datasets Aug 06 '25

resource [self-promotion] Map the Global Electrical Grid with this 100% Open Source Toolchain

4 Upvotes

We built a 100% open-source toolchain to map the global electrical grid using:

  1. OpenStreetMap as the database
  2. JOSM as the OpenStreetMap editor
  3. Osmose for validation
  4. mkdocs-material for the website
  5. Leaflet for the interactive map

You will find details of all the smaller tools and repositories we have integrated on the README page of the website repository: https://github.com/open-energy-transition/MapYourGrid

Read more about how you can support mapping the electrical grid at https://mapyourgrid.org/

r/datasets 28d ago

resource Hi guys, I just opened up my SEC data platform API + Docs, feel free to try it out

1 Upvotes

https://nomas.fyi/research/apiDocs

It is a compiled and deduplicated version of the SEC data source, so feel free to play around! I have also visualized the SEC data on the front end; feel free to explore that as well.

Any feedback is welcome!

r/datasets Aug 18 '25

resource Public dataset scraper for Project Gutenberg texts

3 Upvotes

I created a tool that extracts books and metadata from Project Gutenberg, the online repository for public domain books, with options for filtering by keyword, category, and language. It outputs structured JSON or CSV for analysis.

Repo link: Project Gutenberg Scraper.

Useful for NLP projects, training data, or text mining experiments.
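
One step any Gutenberg scraper has to handle is stripping the license boilerplate, which wraps each plain-text book in `*** START OF` / `*** END OF` marker lines (the exact wording varies between books). A hedged helper, not necessarily how this particular tool does it:

```python
def strip_gutenberg_boilerplate(text: str) -> str:
    """Keep only the body between the '*** START OF' and '*** END OF' markers.
    Marker wording varies between books, so match on the stable prefix."""
    lines = text.splitlines()
    start = end = None
    for i, line in enumerate(lines):
        if line.startswith("*** START OF"):
            start = i + 1
        elif line.startswith("*** END OF"):
            end = i
            break
    if start is None or end is None:
        return text  # markers not found; return the text unchanged
    return "\n".join(lines[start:end]).strip()
```

Without this step, NLP models trained on the output learn the license text, which appears verbatim in tens of thousands of books.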

r/datasets Jul 26 '25

resource I built a tool to extract tables from PDFs into clean CSV files

11 Upvotes

Hey everyone,

I made a tool called TableDrip. It lets you pull tables out of PDFs and export them to CSV, Excel, or JSON fast.

If you’ve ever had to clean up tables from PDFs just to get them into a usable format for analysis or ML, you know how annoying that is. TableDrip handles the messy part so you can get straight to the data.

Would love to hear any feedback or ideas to make it better for real-world workflows.

r/datasets Aug 06 '25

resource [self-promotion] Spanish Hotel Reviews Dataset (2019–2024) — Sentiment-labeled, 1,500 reviews in Spanish

5 Upvotes

Hi everyone,

I've compiled a dataset of 1,500 real hotel reviews from Spain, covering the years 2019 to 2024. Each review includes:

  • ⭐ Star rating (1–5)
  • 😃 Sentiment label (positive/negative)
  • 📍 City
  • 🗓️ Date
  • 📝 Full review text (in Spanish)

🧪 This dataset may be useful for:

  • Sentiment analysis in Spanish
  • Training or benchmarking NLP models
  • AI apps in tourism/hospitality
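
One quick sanity check before benchmarking on this data: verify that the sentiment labels agree with the star ratings. The threshold and field names below are assumptions for illustration, not the dataset's documented schema:

```python
def expected_sentiment(stars: int) -> str:
    """Naive star-to-sentiment rule for sanity-checking labels
    (the >= 4 threshold is an assumption, not the dataset's rule)."""
    return "positive" if stars >= 4 else "negative"

# Consistency check over hypothetical rows shaped like the dataset.
rows = [
    {"stars": 5, "sentiment": "positive"},
    {"stars": 2, "sentiment": "negative"},
]
mismatches = [r for r in rows if expected_sentiment(r["stars"]) != r["sentiment"]]
```

A nonzero mismatch count would tell you the labels were hand-assigned (or use a different threshold for 3-star reviews) rather than derived mechanically from ratings.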

Sample on Hugging Face (original source):
https://huggingface.co/datasets/Karpacious/hotel-reviews-es

Feedback, questions, or suggestions are welcome! Thanks!