Real-World AI Automation Proficiency Exam

You can register for the exam at the following link:
https://jurumani.app.n8n.cloud/form/1f959489-6a78-40eb-87c3-af42457f92e1

A full-day practical examination designed to assess job-readiness by simulating a real-world automation development project. This portal outlines the structure, challenges, and evaluation criteria.

Exam Day Schedule

1. 09:00 - 09:30 | Introduction & Briefing: Set expectations, explain the structure, and distribute materials.
2. 09:30 - 12:30 | Build Session 1: Focus on Tiers 1 (Foundational) & 2 (Intelligent) workflows.
3. 12:30 - 13:00 | Lunch Break
4. 13:00 - 15:00 | Build Session 2: Tackle Tier 3 (Advanced), debug, refine, and document.
5. 15:00 - 17:00 | Demonstrations: Learners present their solutions, design choices, and business value.

The Three-Tiered Challenge

The core of the exam is a practical build session. The tasks are designed with progressive difficulty, allowing learners to demonstrate foundational skills while enabling high-achievers to showcase advanced, agentic design patterns.

Environment & Toolkit

To ensure a fair and standardized testing environment, learners are provided with a controlled set of resources and are expected to configure their local development environment.

Local Deployment

Deploy a local n8n instance using Docker. This mandatory first step tests learners' self-sufficiency.

docker-compose up -d

Bonus: Connect the n8n instance to a separate PostgreSQL container for production-grade data persistence.
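For reference, a minimal docker-compose.yml sketch covering both the base setup and the PostgreSQL bonus is shown below. It assumes the official n8nio/n8n and postgres images; the service names, port mapping, and credentials are illustrative placeholders, not prescribed exam values.

  # docker-compose.yml - illustrative sketch, not the required exam configuration
  services:
    postgres:
      image: postgres:16            # bonus: separate PostgreSQL container
      environment:
        POSTGRES_USER: n8n          # placeholder credentials
        POSTGRES_PASSWORD: n8n_password
        POSTGRES_DB: n8n
      volumes:
        - postgres_data:/var/lib/postgresql/data

    n8n:
      image: n8nio/n8n              # official n8n image
      ports:
        - "5678:5678"               # n8n editor UI
      environment:
        DB_TYPE: postgresdb         # point n8n at the Postgres container
        DB_POSTGRESDB_HOST: postgres
        DB_POSTGRESDB_DATABASE: n8n
        DB_POSTGRESDB_USER: n8n
        DB_POSTGRESDB_PASSWORD: n8n_password
      volumes:
        - n8n_data:/home/node/.n8n  # persist workflows and credentials
      depends_on:
        - postgres

  volumes:
    postgres_data:
    n8n_data:

Running docker-compose up -d with a file like this starts both containers; omitting the postgres service and the DB_* variables falls back to n8n's default SQLite storage, which is sufficient for the base task.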

Test Data Package

A ZIP file will be provided containing a mix of documents to test classification and data extraction logic.

  • 5 Invoices (PDF)
  • 3 Receipts (JPG)

Mock ERP API Endpoint

A stable mock API will be provided for testing the final stage of the workflow. This ensures success is based on logic, not external service reliability. The endpoint will be `POST /api/v1/transactions`.

Parameter         Description
HTTP Method       POST
Headers           Content-Type: application/json, X-API-Key: <key>
Request Body      {"documentId": "string", "vendorName": "string", "totalAmount": number, "vat": number, "transactionDate": "YYYY-MM-DD", ...}
Success (201)     {"status": "success", ...}
Error (400/401)   {"status": "error", ...}
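
As a quick sanity check, a request shaped like the table above can be sent with curl. The base URL, API key, and field values below are placeholders; use the values issued on exam day.

  # Illustrative request against the mock ERP endpoint (placeholder host, key, and data)
  curl -X POST "https://mock-erp.example.com/api/v1/transactions" \
    -H "Content-Type: application/json" \
    -H "X-API-Key: <key>" \
    -d '{
      "documentId": "INV-001",
      "vendorName": "Acme Stationery",
      "totalAmount": 1150.00,
      "vat": 150.00,
      "transactionDate": "2024-05-31"
    }'

A 201 response with "status": "success" confirms the payload passed validation; a 400 or 401 signals a malformed body or a missing/invalid X-API-Key.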

Evaluation Framework

A standardized rubric ensures objective and fair assessment. Evaluation covers technical implementation, documentation quality, and the professionalism of the final demonstration.

Assessment Category Weighting

Detailed Assessment Rubric

Category: Novice (0-5) | Competent (6-10) | Proficient (11-15) | Expert (16-20)
Environment: Fails setup | Needs help | Sets up Docker | Completes bonus Postgres
Tier 1: Incomplete | Has bugs | Fully functional | Flawless w/ enhancements
Tier 2: No attempt | Poor extraction/no error handling | Structured data & error handling | Advanced prompt & detailed notifications
Tier 3: No attempt | Broken logic | Functional HITL (human-in-the-loop) | Polished & robust HITL
Documentation: None | Minimal/unclear | Clear README | Exhaustive & maintainable
Demonstration: Unstructured | Lacks business context | Clear Why/How/What | Compelling narrative