Company Detail

G2i Inc.


Job Openings

  • Job Description

    Now prioritizing candidates with a research, simulation, or systems focus!

    List of accepted countries and locations

    If you’ve worked in academic labs, simulation environments, or low-level performance engineering, this role is built for you.

    Help train large language models (LLMs) to write clean, high-performance scientific code.

    Your expertise will support this feedback loop:

    Compare & rank AI-generated code used in scientific or data-heavy environments

    Repair & refactor code in MATLAB, Zig, or related tools

    Inject feedback to help the model learn how to reason through complex logic

    End result: The model gets better at working in research, simulation, or performance-critical domains.
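    For a concrete picture of the compare-and-rank step, here is a minimal, hypothetical Python sketch of how one pairwise comparison with a written rationale might be recorded. The structure and field names are illustrative assumptions, not G2i's actual task format.

        from dataclasses import dataclass, asdict
        import json

        @dataclass
        class PairwiseRanking:
            """One hypothetical comparison task: two AI-generated snippets, a verdict, a rationale."""
            task_id: str
            language: str   # e.g. "MATLAB" or "Zig"
            snippet_a: str
            snippet_b: str
            preferred: str  # "A", "B", or "tie"
            rationale: str  # the written explanation reviewers provide

        # Example: preferring a vectorized MATLAB expression over an explicit loop.
        ranking = PairwiseRanking(
            task_id="demo-001",
            language="MATLAB",
            snippet_a="y = zeros(1,n); for i = 1:n, y(i) = x(i)^2; end",
            snippet_b="y = x.^2;",
            preferred="B",
            rationale="B is vectorized, shorter, and idiomatic MATLAB; A preallocates and then loops element by element.",
        )
        print(json.dumps(asdict(ranking), indent=2))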

    What You’ll Need

    3+ years of software engineering experience in Python

    Familiarity with MATLAB (academic/research) or Zig (low-level performance)

    Ability to assess and explain code quality with precision

    Excellent written communication and attention to detail

    Comfortable in async, remote workflows

    What You Don’t Need

    No RLHF or machine learning experience required

    Tech Stack

    We're looking for strength in MATLAB, Zig, or scientific computing tools.

    Logistics

    Location: Fully remote — work from anywhere

    Compensation: $30–$70/hr depending on location and seniority

    Hours: Minimum 15 hrs/week, up to 40 hrs/week available

    Engagement: 1099 contract

    Straightforward impact. Zero fluff.
    If this sounds like a fit, apply here!

  • Job Description

    Now prioritizing engineers with infrastructure and automation experience!

    List of accepted countries and locations

    If you're comfortable working with IaC tools, build systems, or CI/CD pipelines—and you enjoy reviewing and explaining code—we’d love to hear from you.

    Help train large language models (LLMs) to write production-grade infrastructure code.

    You’ll play a key role in the human feedback loop:

    Compare & rank IaC scripts or automation logic, explaining which is best and why

    Repair & refactor AI-generated Bash, Terraform, and config files for correctness and efficiency

    Inject feedback (ratings, edits, test results) into the RLHF pipeline to keep it running smoothly

    End result: The model learns to provision, configure, and automate the way you do.
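    As one illustration of the repair-and-verify side of that loop, here is a small hypothetical Python sketch that syntax-checks an AI-generated Bash script with bash -n before the reviewer writes up feedback. It is a sketch of the general idea, not G2i's actual pipeline.

        import subprocess
        import tempfile

        def bash_syntax_ok(script_text: str) -> tuple[bool, str]:
            """Parse-check a candidate script with `bash -n` (no execution).

            Returns (passed, diagnostics) -- purely illustrative of the kind of
            quick correctness check a reviewer might run before writing feedback.
            """
            with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
                f.write(script_text)
                path = f.name
            result = subprocess.run(["bash", "-n", path], capture_output=True, text=True)
            return result.returncode == 0, result.stderr.strip()

        # Example: an AI-generated snippet with an unterminated quote.
        ok, diagnostics = bash_syntax_ok('echo "deploying...\ncp app.tar /srv/')
        print("syntax ok:", ok)
        print(diagnostics)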

    What You’ll Need

    3+ years of professional experience in DevOps, SRE, or platform engineering

    Proficiency in one or more of the following: Bash, Shell, Terraform, YAML, HCL, or CMake

    Strong instincts for debugging and refactoring automation code

    Excellent written communication; explaining the why matters a lot here

    Comfort working in an async, low-oversight environment

    What You Don’t Need

    No prior RLHF (Reinforcement Learning from Human Feedback) or AI experience

    No deep machine learning background—we’ll teach you how the system works

    Tech Stack

    Python for core tasks; IaC, shell, and build tooling are your domain.

    Logistics

    Location: Fully remote — work from anywhere

    Compensation: $30–$70/hr depending on location and seniority

    Hours: Minimum 15 hrs/week, up to 40 hrs/week available

    Engagement: 1099 contract

    Straightforward impact. Zero fluff.
    If this sounds like a fit, apply here!

  • Job Description

    Now prioritizing engineers who’ve worked with frontend templates or automated scripts.

    List of accepted countries and locations

    If you’ve written Handlebars templates or built workflow tools with Google Apps Script, this role is for you.

    Help train large language models (LLMs) to generate maintainable UI and workflow automation code.

    You’ll engage in the human feedback loop:

    Compare & rank templating or automation scripts, explaining your reasoning

    Repair & refactor AI-generated code for logic, structure, and best practices

    Inject feedback to guide how the model writes and fixes real-world frontend/automation logic

    End result: The model learns to write scripts and templates you’d actually ship.
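    To make the repair-and-feedback work concrete, here is a hypothetical Python sketch (standard library only) that packages a reviewer's fix to an AI-generated Handlebars snippet as a unified diff plus a written rationale. The record structure is an illustrative assumption, not the actual task format.

        import difflib

        def feedback_record(original: str, revised: str, rationale: str) -> dict:
            """Bundle a reviewer's edit as a unified diff plus the reasoning behind it.

            Hypothetical structure for illustration; the real task format may differ.
            """
            diff = "\n".join(difflib.unified_diff(
                original.splitlines(), revised.splitlines(),
                fromfile="ai_generated", tofile="reviewer_fixed", lineterm="",
            ))
            return {"diff": diff, "rationale": rationale}

        # Example: replacing an unescaped Handlebars expression with the escaped form.
        record = feedback_record(
            "<p>{{{userBio}}}</p>",
            "<p>{{userBio}}</p>",
            "Triple-stash output is not HTML-escaped; the double-stash form prevents injected markup.",
        )
        print(record["diff"])
        print(record["rationale"])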

    What You’ll Need

    3+ years of software engineering experience, ideally with UI or scripting tools

    Familiarity with Handlebars, Google Apps Script, or similar templating/automation stacks

    A sharp eye for code quality and the ability to justify changes clearly

    Comfort with asynchronous collaboration and independent work

    What You Don’t Need

    No prior experience with RLHF, AI, or ML required

    Tech Stack

    You’ll be focused on legacy front-end and workflow scripting technologies.

    Logistics

    Location: Fully remote — work from anywhere

    Compensation: $30–$70/hr depending on location and seniority

    Hours: Minimum 15 hrs/week, up to 40 hrs/week available

    Engagement: 1099 contract

    Straightforward impact. Zero fluff.
    If this sounds like a fit, apply here!

  • Job Description

    Now prioritizing developers with experience in game scripting or creative tech!

    List of accepted countries and locations

    If you’ve built games, contributed to a MUD, or worked with a game engine like Godot, your skills are highly relevant here.

    Help train large language models (LLMs) to generate smart, playable game code.

    You’ll participate in the human feedback loop:

    Compare & rank code snippets for game logic, scripting, or scene-building

    Repair & refactor AI-generated code in Lua, GDScript, or LPC for correctness and clarity

    Inject feedback into the RLHF system to shape how the model reasons about code

    End result: The model learns how to write, critique, and debug game code like a real developer.
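    As a rough illustration of the compare-and-rank step, here is a hypothetical Python sketch of a simple rubric a reviewer might apply when ranking two game-scripting snippets. The criteria and weights are invented for illustration, not a prescribed rubric.

        # Hypothetical rubric for comparing two AI-generated game-logic snippets.
        # Criteria and weights are illustrative only.
        CRITERIA = {"correctness": 0.5, "readability": 0.3, "performance": 0.2}

        def score(ratings: dict) -> float:
            """Weighted score from per-criterion ratings on a 1-5 scale."""
            return sum(weight * ratings[name] for name, weight in CRITERIA.items())

        # Example: two Lua snippets implementing the same movement logic, rated by a reviewer.
        snippet_a = {"correctness": 5, "readability": 3, "performance": 4}  # works, but hard to follow
        snippet_b = {"correctness": 5, "readability": 5, "performance": 4}  # same behavior, clearer structure

        preferred = "A" if score(snippet_a) > score(snippet_b) else "B"
        print(f"A={score(snippet_a):.1f}  B={score(snippet_b):.1f}  preferred={preferred}")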

    What You’ll Need

    3+ years of hands-on development experience (indie game dev counts!)

    Familiarity with Lua, GDScript, or LPC/MudOS

    Strong instincts for debugging gameplay or scripting bugs

    Clarity in writing—this job is about explaining what’s wrong and why

    Happy working asynchronously in a remote setup

    What You Don’t Need

    No prior AI/ML/RLHF background required

    Tech Stack

    We welcome experience with scripting languages, game engines, and custom tooling.

    Logistics

    Location: Fully remote — work from anywhere

    Compensation: $30–$70/hr depending on location and seniority

    Hours: Minimum 15 hrs/week, up to 40 hrs/week available

    Engagement: 1099 contract

    Straightforward impact. Zero fluff.
    If this sounds like a fit, apply here!

