Company Detail

The Recruiting Guy


Job Openings

  • Software Engineer  

    - San Jose
    Note: This position is open to remote applicants based in the US only.

    Job Title: Software Engineer (Data Platform)
    Location: Remote (United States only)
    Employment Type: Salaried W2, full-time
    Salary: $125,000 - $200,000

    About the Company

    We represent a rapidly growing data company in NYC that’s redefining how real-world assets are represented and traded on public blockchains. Their platform serves investors, issuers, and financial institutions by providing reliable analytics, market intelligence, and transparent data on tokenized assets across the globe.

    They’re trusted by leading players in finance and blockchain for their accuracy, scale, and forward-thinking approach to digital asset infrastructure. It’s an exciting opportunity to join a team that’s helping shape the future of real-world asset tokenization and build technology that’s changing how the financial world connects.

    Responsibilities

    - Build and scale core data systems and APIs that serve product-level analytics
    - Collaborate with application engineers to ensure clean data flow between backend systems and end-user features
    - Develop and optimize data pipelines using PySpark and Databricks
    - Work closely with the lead data engineer on system architecture and data infrastructure design
    - Participate in system design discussions focused on scalability, performance, and maintainability
    - Contribute to the full software development lifecycle, from design through deployment
    - Support product and engineering teams by turning raw data into usable insights
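    The last responsibility, turning raw data into usable insights, can be illustrated with a minimal pure-Python sketch. The asset names, record shape, and `summarize_trades` function below are hypothetical; in this role the equivalent logic would typically live in PySpark/Databricks jobs operating over much larger datasets.

    ```python
    from collections import defaultdict

    def summarize_trades(trades):
        """Aggregate raw trade records into per-asset trade counts and USD volume.

        Each record is a dict like {"asset": "T-BILL-01", "usd_value": 1000.0}.
        """
        totals = defaultdict(lambda: {"trade_count": 0, "usd_volume": 0.0})
        for t in trades:
            summary = totals[t["asset"]]
            summary["trade_count"] += 1
            summary["usd_volume"] += t["usd_value"]
        return dict(totals)

    # Hypothetical raw records, standing in for tokenized-asset trade data.
    raw = [
        {"asset": "T-BILL-01", "usd_value": 1_000.0},
        {"asset": "T-BILL-01", "usd_value": 2_500.0},
        {"asset": "REIT-07", "usd_value": 400.0},
    ]
    print(summarize_trades(raw))
    ```

    The same grouping-and-aggregation shape maps directly onto a `groupBy().agg()` call in PySpark once the data no longer fits on one machine.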


    Ideal Background

    - 4 to 5+ years of software engineering experience, preferably focused on large-scale data systems
    - Strong proficiency in Python and experience with PySpark
    - Experience with distributed frameworks such as Apache Spark, Beam, Flink, or Kafka Streams
    - Proven ability to design, build, and maintain production-grade data pipelines and APIs
    - Background in computer science, computer engineering, applied mathematics, or a related field (top-50 university or equivalent rigor preferred)
    - Experience working on data-driven products rather than internal BI or reporting systems
    - Strong communication skills and the ability to explain technical tradeoffs clearly
    - High attention to detail, an ownership mindset, and a passion for building high-quality systems


    Nice to Have

    - Experience in fintech, blockchain, or other data-intensive environments
    - Hands-on experience with Databricks or real-time streaming data systems
    - Demonstrated curiosity and craftsmanship through side projects or open-source work

  • Software Engineer  

    - Boulder
    (Same role and description as the San Jose listing above.)
  • Software Engineer  

    - Raleigh
    (Same role and description as the San Jose listing above.)
  • Software Engineer  

    - Los Angeles
    (Same role and description as the San Jose listing above.)
  • Software Engineer  

    - Washington
    (Same role and description as the San Jose listing above.)
  • Software Engineer  

    - Round Rock
    (Same role and description as the San Jose listing above.)
  • Software Engineer  

    - Denver
    (Same role and description as the San Jose listing above.)
  • Software Engineer  

    - Arlington
    (Same role and description as the San Jose listing above.)
  • Software Engineer  

    - Boston
    (Same role and description as the San Jose listing above.)
  • Software Engineer  

    - New York
    (Same role and description as the San Jose listing above.)

Company Detail

  • Email verified: No
