This guide shows Acquira teammates how to get access to the Deal Flow Matcher APIs, install the workflow in Codex, and generate internal reporting such as how many deals we scraped from Florida this week.
To generate reports from Deal Flow Matcher data, each colleague needs access to the same internal project that powers the automation. The project already contains the scripts, schemas, and working credential pattern.
| # | Name | Description |
|---|---|---|
| 1 | GitHub repo access | They must be invited to the private Deal Flow Matcher repository so they can clone the project and read the internal docs. |
| 2 | Credential access | They need the internal credential file or the approved replacement values in a local .env. Do not move secrets into public docs or public repos. |
| 3 | Codex workspace | They should open the cloned project in Codex so Codex can run the reporting scripts against the local files and APIs. |
| 4 | Python and Git | Python 3.9+ and Git are sufficient for the core reporting flow. |
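The Python 3.9+ and Git requirements can be verified programmatically. This is an optional sketch using only the standard library; it is a convenience check, not part of the project itself:

```python
import shutil
import sys

# Verify the core reporting prerequisites: Python 3.9+ and Git on PATH.
def check_prerequisites():
    ok = True
    if sys.version_info < (3, 9):
        print(f"Python 3.9+ required, found {sys.version.split()[0]}")
        ok = False
    if shutil.which("git") is None:
        print("Git not found on PATH")
        ok = False
    return ok

if __name__ == "__main__":
    print("prerequisites ok" if check_prerequisites() else "missing prerequisites")
```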
The safest way to share access is to add teammates to the private repository and let them use the approved internal credential file there. Avoid copying live API values into Slack, public docs, or public Cloudflare pages.
Use this workflow for any teammate who should be able to generate ad hoc reports in Codex.
```bash
git clone https://github.com/andi-c-ops/deal-flow-matcher.git
cd deal-flow-matcher
cp credentials.env .env

# Optional but helpful for rendered docs
pip3 install markdown
```
After the repo is cloned, the teammate can open that folder in Codex. The project already includes:
| # | Name | Description |
|---|---|---|
| 1 | scripts/fetch_airtable_deals.py | Pulls the Airtable deal records into normalized JSON. |
| 2 | scripts/enrich_deals.py | Infers industries from titles, which is useful for reporting by industry. |
| 3 | docs/deal-flow-matcher-documentation.md | Full workflow explanation for installation, cron, data schemas, and known issues. |
| 4 | skill/SKILL.md | Prompting guidance for using the project as a Codex-accessible workflow. |
Once the project is open in Codex, teammates can ask directly for reports. These are the highest-value starting prompts.
- "How many deals did we scrape from Florida this week? Pull directly from Airtable and show the matching deal titles."
- "Generate a report of deal counts by state for the last 7 days. Return the top 10 states and save the raw output to runtime/data."
- "Using title-based industry classification, how many HVAC, Plumbing, and Landscaping deals did we scrape this month?"
- "Compare Florida deal counts for 2025 versus 2026 using Airtable Created At dates and title-based industry categories."
If a teammate prefers commands over natural language, this is the core pattern Codex can run:
```bash
python3 scripts/fetch_airtable_deals.py --output runtime/data/deals.json
python3 scripts/enrich_deals.py --deals runtime/data/deals.json --output runtime/data/deals_enriched.json

# Then analyze runtime/data/deals_enriched.json for:
# - created_at in the last 7 days
# - state == Florida
# - optional title-based industry counts
```
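The analysis step described in the comments above can be sketched in Python. The field names created_at, state, and title are assumptions about the normalized JSON; confirm the actual keys against the schema in docs/deal-flow-matcher-documentation.md:

```python
import json
from datetime import datetime, timedelta, timezone

# Sketch: count Florida deals created in the last 7 days.
# Assumes each deal has ISO-8601 "created_at", plus "state" and "title".
def florida_deals_last_7_days(deals):
    cutoff = datetime.now(timezone.utc) - timedelta(days=7)
    matches = []
    for deal in deals:
        created = datetime.fromisoformat(deal["created_at"].replace("Z", "+00:00"))
        if deal.get("state") == "Florida" and created >= cutoff:
            matches.append(deal)
    return matches

if __name__ == "__main__":
    with open("runtime/data/deals_enriched.json") as f:
        deals = json.load(f)
    recent = florida_deals_last_7_days(deals)
    print(f"Florida deals in the last 7 days: {len(recent)}")
    for deal in recent:
        print("-", deal.get("title", "(untitled)"))
```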
Once a colleague has repo access plus Codex access, the reporting surface becomes much broader than the scheduled flows. They can answer questions by state, date range, title-based industry, or AE-specific matching without editing the production cron jobs.
A common question is whether this documentation should be hosted on Cloudflare. For now, Cloudflare is not necessary: the content includes internal operational guidance tied to a private repository and internal credentials. The safer default is:
| # | Name | Description |
|---|---|---|
| 1 | Keep the HTML in the private repo | Share the file or the repo path only with approved teammates. |
| 2 | Use Cloudflare only for a sanitized version | If you want broader distribution, remove any mention of internal credential handling and private repo assumptions first. |
| 3 | Use this guide as onboarding | Pair it with a short checklist for GitHub access, Codex install, and first successful Airtable report. |