Software Engineer / Systems Builder

Aditya Jhaveri builds internal platforms, distributed systems, and practical AI workflows.

Graduate CS engineer turning operational bottlenecks into fast, usable software.

Every Computer Science journey begins with Hello World. Mine began with Age of Empires, which shaped how I think about systems, tradeoffs, and execution. Today I focus on backend-heavy products that improve reliability, scale data pipelines, and make internal teams move faster.

Internal SaaS · AI Workflow Automation · Distributed Systems · Data Platforms
Aditya Jhaveri
systems in orbit
Impact Console · Live Snapshot
3B+ Daily records supported across large-scale data workflows
10+ REST endpoints shipped for internal NYU workflow modernization
40% Retry failure reduction after concurrency-safe form delivery work
35% Transfer time improvement from Kafka-based file movement architecture

< About Me />


I build software that makes complex workflows easier to run.

The orbit map below shows how that focus formed across education, internships, distributed systems work, and internal product delivery.

Career Orbit Map

Hover Any Orbit
Hover a ring or planet to inspect that phase.

< Professional Experience />

Software Developer

New York University
Feb 2025 - Present
Internal SaaS / Ops

Developing workflow software for NYU's Global Enrollment Management and Student Success team, supporting enrollment operations, staff communications, and service processes used by campus administrators.

Role Focus

Partner with EM tech admins to design and improve Google Cloud-based SaaS solutions across the full lifecycle: backend logic analysis, front-end development, documentation, alpha testing, and production optimization.

Impact Highlights

  • Migrated production Google Apps Script tools toward a Node.js + MongoDB architecture, enabling near-real-time access on a consistent 60-second sync cycle
  • Engineered a fault-tolerant, concurrency-safe form delivery system, reducing retry failures by 40% during peak submissions
  • Built and owned a placeholder-driven email template platform that auto-populates user, credential, and device data, reducing email management effort by 55%
  • Implemented compound label-based filtering (UI + backend query logic), reducing lookup time across 2,000+ records
  • Improved Google Apps Script retrieval speed by 50% by replacing service-layer calls with direct Google Sheets API access
  • Architected a PoC migrating 1,000+ Google Sheets records to Firestore and Google Cloud SQL for scalability and reliability
  • Built an automated Shipping Label Page that reduced manual processing time by 50%
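The concurrency-safe delivery work behind the retry-failure number above can be sketched as an idempotency-key pattern. This is a minimal Python illustration (the production system is Node.js; all names here are hypothetical, not the actual code):

```python
import time
import threading

class FormDeliveryQueue:
    """Illustrative idempotent, retry-safe delivery queue."""

    def __init__(self, send_fn, max_retries=3, base_delay=0.01):
        self._send = send_fn
        self._max_retries = max_retries
        self._base_delay = base_delay
        self._delivered = set()        # idempotency keys already claimed
        self._lock = threading.Lock()  # guards the set under concurrent submissions

    def deliver(self, key, payload):
        # Claim the key atomically so concurrent duplicate submissions become no-ops.
        with self._lock:
            if key in self._delivered:
                return False
            self._delivered.add(key)
        for attempt in range(self._max_retries):
            try:
                self._send(payload)
                return True
            except Exception:
                time.sleep(self._base_delay * 2 ** attempt)  # exponential backoff
        # Every retry failed: release the key so a later attempt can try again.
        with self._lock:
            self._delivered.discard(key)
        return False
```

The key idea is claiming the idempotency key under a lock before the first send, and releasing it only when all retries are exhausted.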

Stack & Skills

JavaScript · Google Apps Script · Google Sheets API · Node.js · MongoDB · HTML · UI/UX Testing · Technical Documentation

Software Development Engineer

Sainapse
July 2022 - May 2024
Data Platforms

Worked in Sainapse's Research, Technology and Platform department, building platform capabilities that supported large-scale data ingestion, transfer, and downstream analytics and ML workflows.

Role Focus

Developed PoCs, solved complex production bugs, and implemented performance-focused platform improvements across microservices, storage systems, and data transfer pipelines on AWS (Linux).

Impact Highlights

  • Optimized microservice file transfers via Apache Kafka using byte-level serialization/deserialization and parallel file distribution, cutting runtime by 50% and memory footprint by 33%
  • Spearheaded an HDFS-based ingestion layer in Java for high-volume workloads, storing 3B+ daily records with Hive across 10+ microservices and improving average system performance by 37%
  • Designed a batch data adapter attached to BigQuery tables, enabling on-demand transfer of 2B+ rows for downstream ML workflows including free-text search and deduplication
  • Improved extraction speed from 3GB+ XLSX and DOCX files by 30% by updating parsing logic with the Apryse Java SDK
  • Pioneered an Apache Thrift proof of concept for cross-language model development across Java, Python, and Scala
  • Mentored 4 university interns on product architecture and core technology stack, enabling independent feature delivery
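The Kafka transfer optimization above paired byte-level framing with parallel chunk distribution. A minimal sketch of that idea in stdlib Python, with a plain callable standing in for the actual Kafka producer (this is an assumption-laden illustration, not the Sainapse implementation):

```python
import struct
from concurrent.futures import ThreadPoolExecutor

HEADER = struct.Struct(">IQ")  # chunk index (uint32), payload length (uint64)

def serialize_chunks(data: bytes, chunk_size: int = 64 * 1024):
    """Split a byte stream into self-describing frames safe to send out of order."""
    for offset in range(0, len(data), chunk_size):
        payload = data[offset:offset + chunk_size]
        yield HEADER.pack(offset // chunk_size, len(payload)) + payload

def reassemble(frames):
    """Rebuild the original bytes from frames received in any order."""
    parts = {}
    for frame in frames:
        index, length = HEADER.unpack_from(frame)
        parts[index] = frame[HEADER.size:HEADER.size + length]
    return b"".join(parts[i] for i in sorted(parts))

def transfer(data: bytes, send, workers: int = 4) -> None:
    """Fan frames out to `send` (e.g. a producer's publish call) in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(send, serialize_chunks(data)))
```

Because each frame carries its own index and length, the receiver can reorder and reassemble without any coordination between the parallel senders.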

Stack & Skills

Java · Apache Kafka · HDFS · Apache Hive · AWS (Linux) · BigQuery · Apryse SDK · Apache Thrift · Python · Scala · Data Pipelines · PoCs · Mentorship

Data Science Intern

AiDash
Jan 2022 - June 2022
GeoAI / Remote Sensing

Supported geospatial ML work at AiDash by developing vegetation classification methods and LANDSAT image classification workflows for remote-sensing use cases.

Role Focus

Built statistical approaches for vegetation classification, trained and evaluated LANDSAT machine learning models, and assessed model quality using metrics such as accuracy, precision, recall, and F1 score.

Impact Highlights

  • Evaluated ResNet architectures with Keras and TensorFlow for LANDSAT classification, achieving 60%-75% accuracy across experiments
  • Devised a Python-based grassland classification method to improve land ranking, achieving 78% accuracy and ranking 1st among peer solutions
  • Built a Python script to automate satellite image labeling in QGIS, reducing manual processing time by 80%
  • Resolved geospatial data handling during onboarding by performing CRUD operations on 10,000+ shapefiles using GeoPandas and PostgreSQL
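The model-quality metrics named in this role (accuracy, precision, recall, F1) all reduce to four confusion-matrix counts. A small reference implementation for the binary case, for illustration rather than the AiDash tooling:

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, recall, and F1 for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = len(y_true) - tp - fp - fn
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}
```

Precision penalizes false positives, recall penalizes false negatives, and F1 is their harmonic mean, which is why the three are reported together.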

Stack & Skills

Python · TensorFlow · Keras · ResNet · LANDSAT · QGIS · GeoPandas · PostgreSQL · Geospatial ML · Model Evaluation · Precision/Recall/F1 · Data Labeling

Data Analyst Intern

PayPal
May 2020 - June 2020
Forecasting / Analytics

Supported demand analytics at PayPal by developing and maintaining forecasting models to identify trends in customer demand behavior.

Role Focus

Built forecasting models, evaluated regression performance under multiple loss functions, and presented the findings with a peer team.

Impact Highlights

  • Identified 10+ customer demand trends and developed forecasting models with R² values close to 0.9
  • Achieved R² scores of 0.98, 0.91, and 0.85 under squared (L2), absolute (L1), and maximum (L∞) loss in a linear regression problem, demonstrating robustness to outliers
  • Collaborated with a team of 4 peers to deliver a comprehensive presentation on forecasting customer demand
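The R² scores and loss functions above can be made concrete. A minimal sketch of how each quantity is computed; reading the "infinite" loss as the L∞ (maximum) loss is my assumption:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Three ways to score the same residuals. The L-infinity loss is dominated by
# the single worst prediction, so it is the most sensitive to outliers.
def squared_loss(residuals):   return sum(r * r for r in residuals)
def absolute_loss(residuals):  return sum(abs(r) for r in residuals)
def max_loss(residuals):       return max(abs(r) for r in residuals)
```

R² is 1 for perfect predictions and 0 for a model no better than predicting the mean, which is what makes values near 0.9 a useful headline number.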

Stack & Skills

Python · Forecasting · Linear Regression · Regression Loss Functions · R² Evaluation · Model Evaluation · Data Analysis · Presentation

< Featured Projects />

A few selected builds that best represent how I approach product constraints, backend systems, and measurable technical impact.

Coursework / CV Portfolio

Computer Vision Portfolio

Organized CV portfolio spanning restoration, segmentation, tracking, multimodal Kaggle workflows, and geolocation, with reproducibility notes and reusable scripts.

PyTorch · OpenCV · CellPose · YOLO · Albumentations
Portfolio / Product Layer

Portfolio Chatbot

Embedded recruiter-facing assistant for this site that retrieves context from structured markdown notes and responds with grounded portfolio answers.

JavaScript · PHP · Markdown Retrieval · Gemini API
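A retrieval step like the one this chatbot performs over structured markdown notes can be sketched as keyword-overlap ranking over headed sections. Python is used here only for illustration; the assistant itself is JavaScript/PHP backed by the Gemini API, and these names are hypothetical:

```python
import re

def split_sections(markdown: str):
    """Split markdown into (heading, body) pairs on '#' heading lines."""
    sections, heading, body = [], "intro", []
    for line in markdown.splitlines():
        if line.startswith("#"):
            if body:
                sections.append((heading, " ".join(body)))
            heading, body = line.lstrip("# ").strip(), []
        else:
            body.append(line)
    if body:
        sections.append((heading, " ".join(body)))
    return sections

def retrieve(markdown: str, query: str, k: int = 2):
    """Rank sections by word overlap with the query; return the top k matches."""
    terms = set(re.findall(r"\w+", query.lower()))
    scored = []
    for heading, body in split_sections(markdown):
        words = set(re.findall(r"\w+", (heading + " " + body).lower()))
        scored.append((len(terms & words), heading, body))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [(h, b) for score, h, b in scored[:k] if score > 0]
```

The retrieved sections are then passed to the LLM as grounding context, which is what keeps the answers tied to the portfolio notes rather than free-form generation.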

< Technical Skills />

I work across backend systems, data-heavy services, and internal product delivery. These are the capability areas I reach for most often, rather than an exhaustive inventory of every tool I have touched.

Languages

Core

Primary programming languages I use across systems work, APIs, scripting, and front-end implementation.

Java · Python · JavaScript (ES6+) · C++ · SQL · Bash · HTML/CSS · TypeScript

Frameworks & Backend

App & API

Frameworks and backend patterns I use for web apps, APIs, service layers, and application architecture.

Spring Boot · JPA (Hibernate) · REST APIs · MVC · Express.js · Swagger · Maven · Bootstrap · Node.js · React · FastAPI

Distributed Systems

Scale

Technologies I have used for messaging, large-scale data processing, and cross-service communication.

Kafka · Hadoop · Spark · Apache Thrift

Data, ML & Analytics

Modeling

Libraries and data tooling I use for analytics, machine learning, geospatial work, retrieval, and LLM workflows.

TensorFlow · Keras · NumPy · Pandas · SciPy · scikit-learn · GeoPandas · OpenCV · Qdrant · Pig · Hive · Unsloth · LangChain

Cloud, DevOps & Tools

Delivery

Cloud platforms, storage systems, testing, and workflow tools I use for shipping and maintaining software.

AWS · GCP · BigQuery · PostgreSQL · MySQL · MongoDB · Firestore · Docker · Version Control (Git, CI/CD) · Postman · Jira · IntelliJ · PyCharm · Jupyter Notebook · Visual Studio Code · QGIS · JUnit · pytest · Agile · NoSQL · Cassandra

< Education />

Master of Science in Computer Science

New York University - Tandon School of Engineering

Sep 2024 - May 2026
Graduate
NYU Tandon · Computer Science · New York, USA

Coursework focus: Algorithms, Big Data, Options Pricing and Stochastic Calculus, Search Engines.

Bachelor of Engineering in Computer Science

Birla Institute of Technology and Science, Pilani

Aug 2018 - July 2022 | Minor in Data Science
Undergraduate
BITS Pilani · Computer Science · Minor: Data Science

Coursework focus: Data Mining, Deep Learning, Machine Learning, Natural Language Processing.

< Resume />

Preview my latest experience right here on the site or grab a copy to review later.

open docs/resume.pdf · updated 2025

Preview unavailable in this browser.

Download PDF

< Let's Build Something />

If you want to talk through backend systems, internal tools, AI-enabled workflows, or a role where I can help ship reliable software, reach out directly or send a note below.

[>>] LinkedIn
[</>] GitHub
[*] Jersey City, NJ
