AI Agent Evaluation Analyst (Freelance) - Part-time

Remote, USA - Part-time
This opportunity is only for candidates currently residing in the specified country. Your location may affect eligibility and rates. Please submit your resume in English and indicate your level of English proficiency.

At , innovation meets opportunity. We believe in using the power of collective human intelligence to ethically shape the future of AI.

What we do
The Mindrift platform, launched and powered by , connects domain experts with cutting-edge AI projects from innovative tech clients. Our mission is to unlock the potential of GenAI by tapping into real-world expertise from across the globe.

Who we're looking for:
We're looking for curious and intellectually proactive contributors: the kind of person who double-checks assumptions and plays devil's advocate. Are you comfortable with ambiguity and complexity? Does an async, remote, flexible opportunity sound exciting? Would you like to learn how modern AI systems are tested and evaluated?

This is a flexible, project-based opportunity well-suited for:
• Analysts, researchers, or consultants with strong critical thinking skills.
• Students (senior undergraduates or graduate students) looking for an intellectually interesting gig.
• People open to a part-time, non-permanent opportunity.

About the project:
We're looking for QA contributors for autonomous AI agents, on a new project focused on validating and improving complex task structures, policy logic, and agent evaluation frameworks. Throughout the project, you'll balance quality assurance, research, and logical problem-solving. This opportunity is ideal for people who enjoy looking at systems holistically and thinking through scenarios, implications, and edge cases. You do not need a coding background, but you must be curious, intellectually rigorous, and capable of evaluating the soundness and consistency of complex setups. If you've ever excelled at consulting, CHGK, Olympiads, case solving, or systems thinking, you might be a great fit.
What you'll be doing:
• Reviewing evaluation tasks and scenarios for logic, completeness, and realism.
• Identifying inconsistencies, missing assumptions, or unclear decision points.
• Helping define clear expected behaviors (gold standards) for AI agents.
• Annotating cause-effect relationships, reasoning paths, and plausible alternatives.
• Thinking through complex systems and policies as a human would, to ensure agents are tested properly.
• Working closely with QA, writers, or developers to suggest refinements or edge-case coverage.

How to get started:
Apply to this post, qualify, and get the chance to contribute to a project aligned with your skills, on your own schedule. Shape the future of AI while building tools that benefit everyone.

Requirements
• Excellent analytical thinking: can reason about complex systems, scenarios, and logical implications.
• Strong attention to detail: can spot contradictions, ambiguities, and vague requirements.
• Familiarity with structured data formats: can read (though not necessarily write) JSON/YAML.
• Ability to assess scenarios holistically: what's missing, what's unrealistic, what might break?
• Good communication and clear writing (in English) to document your findings.

We also value applicants who have:
• Experience with policy evaluation, logic puzzles, case studies, or structured scenario design.
• A background in consulting, academia, olympiads (e.g. logic/math/informatics), or research.
• Exposure to LLMs, prompt engineering, or AI-generated content.
• Familiarity with QA or test-case thinking (edge cases, failure modes, "what could go wrong").
• Some understanding of how scoring or evaluation works in agent testing (precision, coverage, etc.).

Benefits
• Get paid for your expertise, with rates of up to $80/hour depending on your skills, experience, and project needs.
• Take part in a flexible, remote, freelance project that fits around your primary professional or academic commitments.
• Participate in an advanced AI project and gain valuable experience to enhance your portfolio.
• Influence how future AI models understand and communicate in your field of expertise.
Apply Now
