About Me

Hello! I'm Bhanu Prakash Vangala

Doctoral Student · Research & Teaching Assistant

University of Missouri — Radiant Lab & Data Intensive Computing Lab

Google PhD Fellowship Nominee ’25 (Top 3 of 6,000) · NASA, DoD & NSF-Funded Research · AAAI ’25 · ACM SRC ’25

Research Interests

LLMs · NLP · Agentic AI · Trustworthy AI · Scalable ML Systems · Reproducibility & Provenance

I am a doctoral student and research & teaching assistant at the University of Missouri with 3+ years of NASA, DoD, and NSF-funded research experience. My work has been recognized through the Google PhD Fellowship Nomination (2025), where I ranked among the top 3 of 6,000 participants, and the Outstanding Master’s Student Award (2025) from the College of Engineering.

My research sits at the intersection of LLM reproducibility, agentic and trustworthy AI, and scalable ML systems. I build self-monitoring LLM infrastructure such as the Pick-and-Spin router for adaptive inference on serverless GPUs, and design factuality pipelines like HalluMat and HalluFormer that drive a 30% reduction in hallucinations for materials science assistants deployed on HPC clusters.

Outside the lab, I mentor 115+ engineers across MERN and NLP courses, orchestrate reproducible container workflows with provenance tracking for collaborative science teams, and contribute to the Missouri AI community through hackathons, reviewing, and outreach.

Research Areas of Focus

  • Trustworthy AI
  • Scalable Systems
  • Factuality
  • Scientific AI
Ph.D. GPA: 3.9/4.0 · Peer-reviewed Works at AAAI ’25 and ACM SRC ’25 · 3+ Years of Funded Research · 115+ Students Mentored

Research Supported By

NASA · Department of Defense (ERDC) · National Science Foundation

Available for internships & collaborations

News

  • 2025.05: Earned my M.S. in Computer Science (GPA: 4.0/4.0) from the University of Missouri, Columbia.
  • 2025.04: Received the Outstanding Master’s Student Award from the MU Department of Computer Science.
  • 2025.04: Selected as a Google PhD Fellowship Nominee (NLP) — ranked top 3 among 6,000 candidates across the university consortium.
  • 2025.04: Submitted a thesis proposal: “Trustworthy AI: Building Self-correcting and Self-evolving Models for Scientific Discovery.”
  • 2025.04: Presented our work on hallucination detection in the AI for Scientific Discovery track at the AAAI 2025 Spring Symposium.
  • 2025.03: Started development of ReflectMemory, focused on persistent memory control for long-context LLM reasoning.
  • 2025.03: Deployed updated KubeLLM framework for multi-tenant LLM inference on GPU-based HPC clusters.
  • 2025.02: Runner-up in the MUIDSI Generative AI for Social Good hackathon with the VisionAI for the Visually Impaired project.
  • 2025.01: Released benchmarking tools for hallucination detection in scientific LLMs, supporting hybrid evaluation methods.
  • 2024.09: Initiated documentation work on scalable LLM-as-a-Service infrastructure using Helm charts and node affinity scheduling.
  • 2024.01: Started as a TA for over 100 students in a web development course, guiding full-stack app development.
  • 2023.12: Led deployment of GPU-efficient LLM inference systems in the university’s Kubernetes-based HPC environment (Nautilus).
  • 2023.08: Began research on faithfulness, interpretability, and robustness in large generative language models.
  • 2023.06: Admitted to the Ph.D. program in Computer Science at the University of Missouri.
  • 2023.05: Graduated with a B.Tech in CSE (Data Analytics) from VIT Vellore.
  • 2023.04: Honored with the Excellence in Research Award at VIT for multilingual NLP and social media analytics contributions.
  • 2023.03: Volunteered as an AI Community Evangelist at Adobe, contributing to community education and developer engagement.
  • 2022.11: Served as an Internshala Student Partner (ISP), leading brand campaigns and peer mentoring on campus.
  • 2021: Collaborated with the Synergy Team at VIT, supporting student experience initiatives and university development programs.
  • 2020: Joined the Brandiverse team as a creative contributor, working on outreach and media strategy.

Publications

AAAI 2025

HalluMat: Hallucination Detection in Scientific LLMs

B. P. Vangala, S. Mahmud, P. Neupane, J. Selvaraj, J. Cheng

  • Primary Innovation: Developed HalluMatData, a comprehensive benchmark dataset specifically designed for evaluating hallucination detection methods in domain-specific large language models applied to materials science research
  • Technical Framework: Implemented HalluMatDetector, a sophisticated multi-stage detection pipeline that combines intrinsic verification mechanisms, multi-source knowledge retrieval, contradiction graph analysis, and comprehensive metric-based assessment protocols
  • Quantitative Results: Demonstrated significant improvement with 30% reduction in hallucination rates compared to baseline LLM outputs, with particular effectiveness in high-entropy query scenarios across multiple materials science subdomains
  • Methodological Contribution: Introduced the Paraphrased Hallucination Consistency Score (PHCS), a novel metric for quantifying inconsistencies in LLM responses across semantically equivalent queries, providing deeper insights into model reliability patterns
  • Knowledge Integration: Combined knowledge graph-based contradiction detection with fine-grained factual verification techniques to establish a more reliable and interpretable framework for AI-assisted scientific discovery processes
  • Domain Impact: Addresses critical challenges in research integrity by providing tools to detect and mitigate factually incorrect or misleading information generation in scientific contexts
Abstract:

Artificial Intelligence (AI), particularly Large Language Models (LLMs), is transforming scientific discovery, enabling rapid knowledge generation and hypothesis formulation. However, a critical challenge is hallucination, where LLMs generate factually incorrect or misleading information, compromising research integrity. To address this, we introduce HalluMatData, a benchmark dataset for evaluating hallucination detection methods, factual consistency, and response robustness in AI-generated materials science content. Alongside, we propose HalluMatDetector, a multi-stage hallucination detection framework integrating intrinsic verification, multi-source retrieval, contradiction graph analysis, and metric-based assessment to detect and mitigate LLM hallucinations. Our findings reveal that hallucination levels vary significantly across materials science subdomains, with high-entropy queries exhibiting greater factual inconsistencies. By utilizing HalluMatDetector’s verification pipeline, we reduce hallucination rates by 30% compared to standard LLM outputs. Furthermore, we introduce the Paraphrased Hallucination Consistency Score (PHCS) to quantify inconsistencies in LLM responses across semantically equivalent queries, offering deeper insights into model reliability. Combining knowledge graph-based contradiction detection and fine-grained factual verification, our dataset and framework establish a more reliable, interpretable, and scientifically rigorous approach for AI-driven discoveries.
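
To make the paraphrase-consistency idea behind PHCS concrete, here is a minimal, hypothetical sketch (not the released HalluMat code): it scores how consistently a model answers semantically equivalent queries, using sentence-transformers embeddings as a stand-in for the paper's fine-grained factual comparison.

```python
# Illustrative sketch only: a paraphrase-consistency score in the spirit of PHCS,
# approximated with sentence embeddings rather than the paper's verification pipeline.
from itertools import combinations

from sentence_transformers import SentenceTransformer, util  # assumed dependency

_model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works here


def paraphrase_consistency(answers: list[str]) -> float:
    """Mean pairwise cosine similarity of answers to paraphrased queries.

    Values near 1.0 suggest the model answers consistently; low values flag
    potential hallucination under semantically equivalent prompts.
    """
    embeddings = _model.encode(answers, convert_to_tensor=True, normalize_embeddings=True)
    pairs = list(combinations(range(len(answers)), 2))
    if not pairs:
        return 1.0
    sims = [float(util.cos_sim(embeddings[i], embeddings[j])) for i, j in pairs]
    return sum(sims) / len(sims)


# Example: three answers an LLM gave to paraphrases of the same materials-science query.
score = paraphrase_consistency([
    "Graphene's thermal conductivity is around 3000-5000 W/mK at room temperature.",
    "At room temperature graphene conducts heat at roughly 3000-5000 W/mK.",
    "Graphene is an electrical insulator with negligible thermal conductivity.",
])
print(f"consistency score: {score:.3f}")
```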

AAAI 2025

HalluFormer: Faithfulness Evaluation Framework

S. Mahmud, P. Neupane, J. Selvaraj, B. P. Vangala, J. Cheng

  • Architectural Innovation: Designed a novel transformer-based neural architecture specifically optimized for multi-dimensional consistency assessment between input questions, generated answers, and retrieved knowledge contexts in large language model outputs
  • Problem Reformulation: Transformed the traditionally complex hallucination detection challenge into a well-defined classification problem that systematically evaluates consistency patterns across diverse knowledge sources and contextual information
  • Empirical Performance: Achieved robust performance with F1 score of 0.9471 on the established MultiNLI test dataset, and demonstrated strong generalization capabilities with F1 score of 0.7285 on the blind ANAH evaluation dataset
  • Cross-Domain Generalization: Exhibits exceptional ability to detect hallucinations across previously unseen data distributions, indicating strong transferability and robustness of the underlying detection mechanisms
  • High-Stakes Applications: Successfully deployed and validated in critical scientific and clinical research environments where factual accuracy is paramount and erroneous information could have significant consequences
  • Methodological Advancement: Establishes transformer-based approaches as a viable and effective paradigm for improving the reliability and trustworthiness of large language model systems in knowledge-intensive domains
Abstract:

Despite the impressive performance of Large Language Models (LLMs) in a variety of natural language processing tasks, they are still prone to producing information that is factually inaccurate, known as hallucination. In critical fields related to scientific and clinical domains that demand highly precise answers, the negative effect of this phenomenon is even more pronounced. To address this problem, we formulate the hallucination detection problem as a classification problem of assessing the consistency between questions, answers and retrieved knowledge contexts and propose HalluFormer, a transformer-based model for detecting hallucinations of LLMs. HalluFormer was trained and tested on the MultiNLI dataset. It achieves an F1 score of 0.9471 on the MultiNLI test dataset. On the blind ANAH test dataset, it achieves an F1 score of 0.7285, indicating it can generalize reasonably well to completely new data. The results demonstrate that transformer-based methods can be utilized to detect hallucinations of LLMs, paving the way for further research on improving the reliability of LLMs.
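
As a rough illustration of the problem framing above, where consistency assessment is cast as classification, the sketch below uses an off-the-shelf MNLI model from Hugging Face as a stand-in; it is not the HalluFormer architecture or checkpoint, and the model name is an assumption.

```python
# Sketch: NLI-style consistency check between retrieved context and a generated answer.
# The checkpoint below is a generic public MNLI model, not HalluFormer.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "microsoft/deberta-large-mnli"  # assumed stand-in checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)


def consistency_label(context: str, answer: str) -> str:
    """Classify whether the answer is entailed by (consistent with) the context."""
    inputs = tokenizer(context, answer, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    label_id = int(logits.argmax(dim=-1))
    return model.config.id2label[label_id]  # e.g. CONTRADICTION / NEUTRAL / ENTAILMENT


print(consistency_label(
    "The retrieved passage states the melting point of titanium is about 1668 °C.",
    "Titanium melts at roughly 1668 degrees Celsius.",
))
```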

ACM SRC 2025

Adaptive Inference: Orchestrating Fine-Tuned LLMs with Serverless GPUs in HPC Environments

Bhanu Prakash Vangala, Tanu Malik

  • Pick-and-Spin router: Policy-driven orchestration that maps fine-tuned domain models to optimal GPU backends using a two-dimensional L × I deployment matrix.
  • Serverless GPUs on Kubernetes: Combines Knative and KEDA to scale inference workloads to zero and restart within seconds for cost-aware HPC usage.
  • Latency–cost balancing: Multi-objective scoring leverages DistilBERT relevance, latency predictors, and pricing signals to pick the best (model, backend) pair per request.
  • HPC integration: Deployed across NSF Nautilus and AWS SageMaker endpoints to support materials, biomedical, and geospatial assistants within the Radiant Lab toolchain.
Abstract:

Large Language Models (LLMs) are increasingly fine-tuned for domain-specific applications in science, engineering, and industry. However, providing these models as a service (LLMaaS) or self hosting poses cost and efficiency challenges: commercial APIs impose high recurring fees, while running multiple fine-tuned models locally wastes expensive GPUs when they sit idle. We present Pick-and-Spin, a framework that combines policy-driven model selection with serverless GPU orchestration. We conceptualize deployment as a two-dimensional grid (L×I), where rows represent fine-tuned models and columns denote inference backends. When a user submits a prompt, Pick-and-Spin dynamically routes it to the most relevant and cost-effective (model, backend) pair, spinning up services on demand while scaling idle ones to zero. By unifying adaptive routing with serverless execution, Pick-and-Spin demonstrates a scalable and affordable pathway for delivering domain-specific LLMs in high-performance computing (HPC) research environments.
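
The routing idea can be sketched as a small multi-objective scoring function over (model, backend) candidates from the L×I grid. The weights, field names, and example candidates below are hypothetical placeholders, not the Pick-and-Spin implementation.

```python
# Toy sketch of policy-driven routing: pick the best (model, backend) pair per request.
from dataclasses import dataclass


@dataclass
class Candidate:
    model: str           # a fine-tuned domain LLM (row of the L x I grid)
    backend: str         # an inference backend (column of the grid)
    relevance: float     # e.g. prompt-to-domain relevance in [0, 1]
    est_latency_s: float # predicted end-to-end latency, including cold start
    est_cost_usd: float  # predicted cost for this request


def route(candidates: list[Candidate], w_rel=0.6, w_lat=0.25, w_cost=0.15) -> Candidate:
    """Return the candidate with the best weighted relevance/latency/cost score."""
    max_lat = max(c.est_latency_s for c in candidates) or 1.0
    max_cost = max(c.est_cost_usd for c in candidates) or 1.0

    def score(c: Candidate) -> float:
        # Reward relevance, penalize normalized latency and cost.
        return (w_rel * c.relevance
                - w_lat * c.est_latency_s / max_lat
                - w_cost * c.est_cost_usd / max_cost)

    return max(candidates, key=score)


best = route([
    Candidate("materials-llm", "knative-gpu", relevance=0.91, est_latency_s=4.0, est_cost_usd=0.002),
    Candidate("materials-llm", "sagemaker",   relevance=0.91, est_latency_s=1.2, est_cost_usd=0.012),
    Candidate("biomed-llm",    "knative-gpu", relevance=0.34, est_latency_s=4.0, est_cost_usd=0.002),
])
print(best.model, best.backend)
```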

Master's Thesis

Deploying LLM-as-a-Service in Kubernetes HPC Clusters

Bhanu Prakash Vangala, Grant Scott, Jianlin Cheng

  • Foundational Infrastructure: Established the foundational Helm-chart-based deployment ecosystem that later evolved into the Pick-and-Spin framework, enabling scalable and reliable large language model inference services in HPC environments
  • Core Resource Management: Engineered fundamental GPU affinity scheduling algorithms, dynamic resource throttling mechanisms, and load balancing protocols that form the basis for advanced serverless orchestration
  • Multi-Tenancy Foundation: Designed secure, isolated multi-tenant access configurations supporting concurrent research workflows while maintaining data privacy and computational resource fairness
  • Performance Baseline: Established baseline performance metrics with 60% reduction in deployment overhead and 40% enhancement in resource utilization, providing the foundation for adaptive scaling improvements
  • Production Validation: Successfully deployed and validated the foundational system on the Nautilus distributed computing cluster, serving over 100 concurrent users and informing the development of dynamic orchestration capabilities
  • Architectural Foundation: Created the underlying architecture and design principles that enabled the evolution toward serverless, adaptive LLM orchestration with Pick-and-Spin framework
Abstract:

This foundational work establishes the architectural principles and deployment strategies for large language models in high-performance computing (HPC) environments that later evolved into the Pick-and-Spin framework. It outlines the core Helm-chart-based approach for deploying containerized models with GPU affinity scheduling, resource throttling, and multi-user access configurations. The research provides the technical foundation for dynamic model orchestration, serverless scaling, and adaptive routing capabilities that enable efficient multi-model LLM deployments in research environments.
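
For illustration, the kind of spec a Helm chart in this setup might render can be sketched as a pod fragment that pins inference replicas to GPU nodes and reserves a GPU; the label key, image, and resource values below are placeholders rather than the thesis configuration.

```python
# Illustrative Kubernetes pod-spec fragment (expressed as a Python dict) showing
# GPU node affinity and a per-replica GPU reservation. All concrete values are placeholders.
inference_pod_spec = {
    "affinity": {
        "nodeAffinity": {
            "requiredDuringSchedulingIgnoredDuringExecution": {
                "nodeSelectorTerms": [{
                    "matchExpressions": [{
                        "key": "nvidia.com/gpu.product",       # example node label from GPU feature discovery
                        "operator": "In",
                        "values": ["NVIDIA-A100-SXM4-80GB"],   # example GPU class
                    }]
                }]
            }
        }
    },
    "containers": [{
        "name": "llm-inference",
        "image": "registry.example.org/llm-server:latest",     # placeholder image
        "resources": {
            "limits": {"nvidia.com/gpu": 1, "memory": "32Gi"}, # reserve one GPU per replica
        },
    }],
}
```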

Project

Brain Tumor Detection in MRI Images

B. P. Vangala, G. Khadakkar, F. Bunyak

  • Model Architecture: Developed and implemented two distinct convolutional neural network architectures: a custom-designed CNN optimized for brain tumor detection and a transfer learning approach utilizing the pre-trained ResNet50V2 architecture
  • Data Engineering: Processed comprehensive MRI image datasets with advanced data augmentation techniques including random flipping, rotation, and zooming transformations to enhance model robustness and reduce overfitting
  • Performance Optimization: Conducted systematic hyperparameter tuning and learning rate optimization to achieve optimal training parameters, resulting in improved accuracy and generalization capabilities
  • Evaluation Framework: Implemented rigorous model evaluation using multiple metrics including accuracy, precision, recall, and F1-score, with comprehensive visualization through confusion matrices and performance analytics
  • Clinical Relevance: Demonstrated the practical potential of neural networks in medical imaging diagnostics, contributing to automated diagnostic assistance tools for healthcare professionals
  • Technical Implementation: Utilized TensorFlow and Keras frameworks with Google Colab infrastructure for efficient model training and deployment
Abstract:

This study presents a comprehensive approach to detecting brain tumors using deep learning algorithms implemented in TensorFlow. The project develops two distinct convolutional neural network (CNN) models—a custom-designed CNN and the pre-trained ResNet50V2—to identify and classify brain tumor presence from MRI images across two datasets. Both models underwent rigorous training, evaluation, and optimization to enhance their accuracy and generalization capabilities. The custom CNN model included data augmentation techniques like random flipping, rotation, and zooming to reduce overfitting and improve model robustness. The performance of each model was meticulously analyzed through metrics such as accuracy, precision, recall, and F1-score, with results visualized using confusion matrices and performance charts. Additionally, learning rate optimization was performed to find the most effective training parameters. The study not only demonstrates the potential of neural networks in medical imaging diagnostics but also explores the effectiveness of model customization and transfer learning for practical applications in healthcare.
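
A minimal Keras sketch of the transfer-learning setup described above (augmentation layers plus a frozen ResNet50V2 backbone) is shown below; the input size, learning rate, and classification head are illustrative assumptions, not the project's exact configuration.

```python
# Sketch of transfer learning for binary tumor classification with ResNet50V2.
import tensorflow as tf
from tensorflow.keras import layers

# Augmentation mirrors the techniques named above: random flip, rotation, zoom.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

base = tf.keras.applications.ResNet50V2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained backbone for the first training phase

inputs = tf.keras.Input(shape=(224, 224, 3))
x = augment(inputs)
x = tf.keras.applications.resnet_v2.preprocess_input(x)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # tumor vs. no tumor

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
```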

MICCAI

Pneumonia Detection in Chest X-rays Using Deep Learning

B. P. Vangala, G. Khadakkar, F. Bunyak

  • Multi-Model Approach: Implemented and evaluated five distinct deep learning architectures including custom CNN, ResNet18, VGG16, ResNet50 with K-Fold Cross-Validation, and EfficientNet for comprehensive performance comparison
  • Dataset Scale: Trained on extensive Chest X-Ray Pneumonia dataset containing 5,216 training images, employing rigorous validation protocols using precision, recall, F1-score, and ROC-AUC metrics
  • Advanced Training Strategies: Utilized innovative techniques including K-fold cross-validation for robust model assessment and multi-GPU acceleration for enhanced computational efficiency
  • Performance Excellence: Achieved superior classification results with EfficientNet demonstrating the highest performance, validating the effectiveness of state-of-the-art architectures in medical image analysis
  • Clinical Integration: Designed solutions specifically for integration into clinical workflows to assist radiologists in diagnostic decision-making processes
  • Healthcare Impact: Addresses critical needs in global healthcare by providing automated, scalable pneumonia detection systems for improved patient outcomes worldwide
Abstract:

Pneumonia is a leading cause of morbidity worldwide, necessitating prompt and accurate diagnosis to improve patient outcomes. This study leverages deep learning techniques to automate the detection of pneumonia from chest X-ray images. Five models are evaluated, including a custom Convolutional Neural Network (CNN), ResNet18, VGG16, ResNet50 with K-Fold Cross-Validation, and EfficientNet. Pretrained architectures are fine-tuned on the publicly available Chest X-Ray Pneumonia dataset, with 5,216 training images, and validated using precision, recall, F1-score, and ROC-AUC metrics. Innovative training strategies such as K-fold cross-validation and multi-GPU acceleration are employed to enhance model robustness. Among the models, EfficientNet achieves the highest classification performance, demonstrating the effectiveness of state-of-the-art architectures in medical image classification tasks. The results suggest that deep learning models can offer a reliable, scalable solution for pneumonia detection, paving the way for integration into clinical workflows to assist radiologists in diagnostic decision-making.
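
The K-fold evaluation loop described above can be sketched as follows; `build_model` and the image/label arrays are hypothetical stand-ins for the project's actual EfficientNet training code, and a Keras-style fit/predict interface is assumed.

```python
# Sketch of stratified K-fold evaluation with F1 and ROC-AUC reporting per fold.
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold


def cross_validate(images: np.ndarray, labels: np.ndarray, build_model, k: int = 5):
    """Train a fresh model on each fold and collect validation metrics."""
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=42)
    scores = []
    for fold, (train_idx, val_idx) in enumerate(skf.split(images, labels)):
        model = build_model()  # fresh (e.g. EfficientNet-based) model per fold
        model.fit(images[train_idx], labels[train_idx],
                  validation_data=(images[val_idx], labels[val_idx]),
                  epochs=10, verbose=0)
        probs = model.predict(images[val_idx]).ravel()
        preds = (probs > 0.5).astype(int)
        scores.append({
            "fold": fold,
            "f1": f1_score(labels[val_idx], preds),
            "roc_auc": roc_auc_score(labels[val_idx], probs),
        })
    return scores
```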

IJARESM

Image Colorization Using AI

B. P. Vangala, S. S. Somepalli, R. Chigurupati

  • Technical Innovation: Developed sophisticated deep learning models combining convolutional autoencoders and Generative Adversarial Network (GAN) architectures to achieve photorealistic image colorization from grayscale inputs
  • Architectural Design: Implemented advanced neural network architectures that learn complex color mappings and semantic understanding to produce naturally colored images with high fidelity
  • Training Methodology: Utilized large-scale image datasets with carefully designed loss functions and training protocols to ensure color accuracy and visual coherence across diverse image categories
  • Quality Assessment: Achieved superior results in both quantitative metrics and qualitative visual evaluation, demonstrating significant improvements over traditional colorization methods
  • Cultural Application: Successfully applied to historical photograph restoration and film colorization projects, contributing to cultural preservation and media enhancement initiatives
  • Research Impact: Published findings demonstrate the potential of AI-driven image processing techniques for creative applications and historical documentation preservation
Abstract:

Colorization is a computer-assisted process of adding color to a monochrome image or film. It typically involves segmenting images into regions and tracking those regions across image sequences; since neither task can yet be performed fully reliably, colorization still requires substantial user intervention and remains a tedious, time-consuming, and costly task. The term was introduced by Wilson Markle in 1970 to describe the computer-assisted process he developed for adding color, though colorizing black-and-white films is an old idea dating back to 1902. For decades, many filmmakers opposed colorizing their black-and-white films and considered it vandalism of their art; today it is accepted as an enhancement to the art form. The technology itself has moved from meticulous hand colorization to today's largely automated methods. In India, the film Mughal-e-Azam, a blockbuster released in 1960, was remastered in color in 2004; audiences of all ages crowded theatres to see it in color, and the film was a major hit for the second time.
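
As a rough sketch of the autoencoder side of this work, the model below maps a grayscale (L) channel to the two chrominance (ab) channels of the Lab color space; the layer sizes are illustrative assumptions, and the GAN discriminator used in the project is omitted for brevity.

```python
# Sketch of a convolutional colorization autoencoder: L channel in, ab channels out.
import tensorflow as tf
from tensorflow.keras import layers


def build_colorizer(size: int = 256) -> tf.keras.Model:
    inputs = tf.keras.Input(shape=(size, size, 1))  # grayscale L channel
    # Encoder: downsample to a compact representation.
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(128, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2D(256, 3, strides=2, padding="same", activation="relu")(x)
    # Decoder: upsample back to full resolution.
    x = layers.Conv2DTranspose(128, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    outputs = layers.Conv2DTranspose(2, 3, strides=2, padding="same",
                                     activation="tanh")(x)  # ab channels in [-1, 1]
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model
```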

Project

KOO: Uncovering User Sentiments and Trends!

Bhanu Prakash Vangala

  • Advisors: Dr. P. Kumaraguru (IIIT Hyderabad) and Dr. Soughbhagya Barpanda guided the thesis as part of the PRECOG/KOO collaboration.
  • Multilingual NLP Pipeline: Engineered a comprehensive sentiment analysis system capable of processing and analyzing user-generated content across multiple languages including Hindi, English, Telugu, and other regional Indian languages
  • Advanced Machine Learning: Implemented state-of-the-art transformer-based models and traditional machine learning algorithms optimized for social media text processing and sentiment classification tasks
  • Real-time Processing: Developed scalable infrastructure for real-time sentiment analysis of millions of posts and comments, enabling immediate insights into user sentiment trends and platform dynamics
  • Platform Integration: Successfully integrated the system into KOO's content moderation workflow, providing automated sentiment-based content filtering and user experience enhancement mechanisms
  • Data Science Innovation: Created novel approaches for handling code-switching, informal language patterns, and cultural context in multilingual social media text analysis
  • Business Impact: Delivered actionable insights that improved user engagement metrics and enhanced platform safety through proactive sentiment-based content management strategies
Abstract:

A multilingual sentiment analysis pipeline delivering real-time user sentiment trends on KOO to aid content moderation and enhance engagement.
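
For a flavor of the multilingual scoring step, the sketch below runs an off-the-shelf XLM-RoBERTa sentiment model over mixed-language posts; the checkpoint choice is an assumption and not the thesis model.

```python
# Sketch: multilingual sentiment scoring over social-media posts.
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-xlm-roberta-base-sentiment",  # assumed multilingual, social-media-tuned checkpoint
)

posts = [
    "This new feature on Koo is fantastic!",
    "यह अपडेट बिल्कुल पसंद नहीं आया।",   # Hindi: "Did not like this update at all."
    "ఈ యాప్ చాలా బాగుంది.",              # Telugu: "This app is very good."
]

for post, result in zip(posts, sentiment(posts)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {post}")
```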

Projects

  • ReflectMemory for Self-Correcting LLMs
    Built a memory module to store chain-of-thought embeddings and ensure reasoning consistency across inference rounds.
  • KubeLLM: LLM-as-a-Service Platform
    Scalable platform for GPU-accelerated LLM inference with Kubernetes, Helm, and autoscaling.
  • HalluMat & HalluFormer
    Evaluation pipelines benchmarking hallucination detection for scientific language models.
  • ChatMed: Medical Chatbot for Health Guidance
    Trained on BioGPT and PubMed articles to deliver symptom-based medical assistance.
  • CropInsight: AI for Agriculture
    Uses computer vision and sequence models to monitor crop health and forecast yields.
  • VisionAI: Hackathon Project @ Mizzou
    Real-time hazard indicator for visually impaired users, runner-up at the Mizzou hackathon.
  • SocialSift: Crisis-aware Sentiment Analysis
    Multilingual transformers analyze social media sentiment during natural disasters.

Honors and Awards

  • 2025.05
    Outstanding Master’s Student Award, College of Engineering, University of Missouri
  • 2025.04
    Selected for Google PhD Fellowship Nomination — ranked top 3 among 6,000 participants and one of three University of Missouri nominees
  • 2025.03
    Runner-Up – MUIDSI Hackathon for VisionAI: AI-Powered Assistance for the Visually Impaired, awarded $1,000
  • 2023
    Dean’s Research Excellence Award, Vellore Institute of Technology (VIT)
  • 2023
    Best Department Thesis Award, VIT for B.Tech thesis on multilingual sentiment analysis
  • 2022
    Runner-Up, VIT AI Tech-Thon
  • 2020
    Certificate of Outstanding Achievement, Data Analyst Intern at Brandiverse
  • 2019–2023
    Multiple Academic Merit Scholarships and recognitions as Internshala Student Partner (ISP) and Synergy Team Lead, VIT

Education

Ph.D. Computer Science, University of Missouri, Columbia

2023.08 – 2027.06 (expected)

  • Co-advised by Dr. Jianlin Cheng and Dr. Tanu Malik
  • Research focus: Trustworthy and Efficient LLMs, Self-Correcting and Evolving Language Models, Evaluation in LLMs
  • Google PhD Fellowship nominee (NLP), Outstanding Student Award recipient
  • GPA: 3.9/4.0
  • Supported by NASA, National Science Foundation, and Department of Defense grants for research in scientific LLMs and scalable AI infrastructure

M.S. Computer Science, University of Missouri, Columbia

2023.08 – 2025.05

  • Thesis: Deploying LLM-as-a-Service in Kubernetes HPC Clusters
  • Advisors: Dr. Grant Scott and Dr. Jianlin Cheng
  • GPA: 4.0/4.0
  • Built Helm/Kubernetes-based LLM inference pipelines in HPC environments
  • TA for Full-Stack MERN Development (mentored 100+ students)

Bachelor of Technology in Computer Science and Engineering (Specialization in Data Analytics), Vellore Institute of Technology, India

2019.05 – 2023.05

  • Excellence in Research Award and Best Department Thesis Award
  • Thesis: Multilingual Sentiment Analysis of Social Media Posts on KOO platform
  • Core member of Synergy Team, Internshala Student Partner and Student Ambassador, Runner-up in VIT AI Tech-Thon
  • Internship/volunteer work: Adobe (AI Evangelist), Brandiverse (Data Analyst)

Intermediate (+2) – MPC, Altitude College, Hyderabad, India

2017.06 – 2019.04

  • Engineering & analytical skill development through JEE prep
  • Ranked 1554 in the MIT Entrance Test; secured 88% in JEE Mains and qualified for JEE Advanced

10th Standard – SSC, City Central School, India

2017.03

Academic Service

  • Conference Volunteer Reviewer: ICML (25, 24, 23), ACL (25, 24, 23), ICCV (25), CVPR (25), ICLR (25), AAAI (25), ICASSP (25), NeurIPS (24), EMNLP (24), ECCV (25), IJCAI (25), NAACL (25)
  • Journal reviewer: TPAMI, JVCI, TIP, TMLR

Teaching Experience

  • Fall 2025, Fall 2024, Spring 2024, Fall 2023 – TA for Web Development

Professional Experience

University of Missouri – Radiant Lab — Research Assistant

Jan 2024 – Present

  • Reproducible Scientific Containers: Enhancing data-savvy, provenance-tracking containers for collaborative model analytics, integrating LLMs to automate debugging and improve reproducibility (NASA Funded)
  • AI Trustworthiness & Self-Reflecting LLMs: Designing models that can monitor, verify, and revise their own reasoning in real time, enabling more reliable and adaptive AI systems
  • Skills Gained: Distributed systems, container orchestration, reproducibility frameworks, advanced NLP integration

Advisor: Dr. Tanu Malik


University of Missouri – Data Intensive Computing Lab — Research & Teaching Assistant

Feb 2024 – Present

  • Hallucination Detection Model: Developed hybrid frameworks for domain-tuned LLMs in materials science, improving factual consistency by 30% (DoD Funded)
  • Scalable NLP Deployment: Designed Helm charts for scalable NLP deployment in HPC environments (NSF Funded)
  • Teaching Excellence: Supported 115+ students in Web Development (MERN stack), mentoring and evaluating projects
  • Expertise Gained: Kubernetes, CI/CD, Helm, advanced NLP frameworks

Advisors: Dr. Grant Scott & Dr. Jianlin Cheng


University of Missouri – PAAL Lab — Research Assistant

Aug 2023 – Jan 2024

  • UAV-based Crop Analysis: Led a UAV-based crop analysis team, improving the accuracy of UAV data processing by 40% with deep learning and HPC-driven workflows
  • Geospatial Analysis: Developed focus enhancement models and performed advanced geospatial analysis (Vegetation Indices, Mapping and Image Stitching) using QGIS

Adobe Research — Volunteer Research Intern

May 2022 – Jan 2023

  • Web Scraping & Information Extraction: Researched web scraping and information extraction as part of NLP team, focusing on large-scale data processing
  • Expertise Gained: Large-scale data processing, visualization, and client-facing research workflows

Mentor: Nanda Kishore


Brandiverse — Data Analyst Intern

May 2020 – Jul 2020

  • Marketing Intelligence: Transformed customer sentiment analysis with sophisticated NLP pipelines, driving strategic marketing improvements

Recognition: Certificate of Outstanding Achievement


Internshala — Student Partner (ISP)

May 2020 – Dec 2020

  • Campus Leadership: Orchestrated career development initiatives, bridging student-industry gaps and promoting professional growth

Thanks for stopping by! Feel free to explore my work or connect with me.