Explored a zero-shot deepfake audio generation model and built a web-hosted demo that synthesizes speech from text, conditioned on embeddings extracted from short reference audio clips.
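A minimal sketch of such a demo, assuming Coqui TTS's YourTTS zero-shot model and a Gradio front end (both are assumptions; the original project's model and hosting stack are not specified here):

```python
# Sketch: zero-shot voice cloning demo. Model and UI library are assumptions.
import gradio as gr
from TTS.api import TTS

# Multilingual zero-shot model that conditions on a short reference speaker clip.
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts")

def clone_voice(text: str, reference_wav: str) -> str:
    """Synthesize `text` in the voice of the speaker in `reference_wav`."""
    out_path = "generated.wav"
    tts.tts_to_file(text=text, speaker_wav=reference_wav,
                    language="en", file_path=out_path)
    return out_path

demo = gr.Interface(
    fn=clone_voice,
    inputs=[gr.Textbox(label="Text to speak"),
            gr.Audio(type="filepath", label="Short reference clip")],
    outputs=gr.Audio(label="Cloned speech"),
)

if __name__ == "__main__":
    demo.launch()
```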
Explored concept drift and built an sklearn-like toolkit that detects drift in image, text, or audio data using multiple drift-detection methods, supporting monitoring and extending the life of production models.
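A minimal sketch of the kind of sklearn-style interface such a toolkit exposes; the class, method names, and the single Kolmogorov-Smirnov test shown here are illustrative assumptions, not the toolkit's actual API:

```python
# Hypothetical sklearn-style drift detector with fit/predict semantics.
import numpy as np
from scipy.stats import ks_2samp

class KSDriftDetector:
    """Per-feature Kolmogorov-Smirnov drift test with a Bonferroni correction."""

    def __init__(self, p_val: float = 0.05):
        self.p_val = p_val
        self.x_ref = None

    def fit(self, x_ref: np.ndarray) -> "KSDriftDetector":
        # Store the reference (training-time) feature distribution.
        self.x_ref = np.asarray(x_ref, dtype=float)
        return self

    def predict(self, x: np.ndarray) -> dict:
        # Test each feature of the new batch against the reference data.
        x = np.asarray(x, dtype=float)
        p_vals = np.array([ks_2samp(self.x_ref[:, i], x[:, i]).pvalue
                           for i in range(self.x_ref.shape[1])])
        threshold = self.p_val / self.x_ref.shape[1]  # Bonferroni correction
        return {"drift": bool((p_vals < threshold).any()), "p_vals": p_vals}

# Usage: fit on reference embeddings, then monitor production batches.
rng = np.random.default_rng(0)
detector = KSDriftDetector().fit(rng.normal(size=(500, 16)))
print(detector.predict(rng.normal(loc=0.5, size=(200, 16))))
```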
Extracted HTML UI elements from website wireframe drawings and screenshots using image pre-processing and confidence-cutoff variation. Achieved $2^{nd}$ rank in the hackathon.
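A rough sketch of the pre-processing plus confidence-cutoff idea, assuming an OpenCV contour pipeline and a placeholder area-fraction score (the hackathon entry's actual detector and scoring are not reproduced here):

```python
# Sketch: binarize a wireframe, treat rectangular contours as candidate UI
# elements, and keep only those above a tunable confidence cutoff.
import cv2

def extract_ui_boxes(image_path: str, conf_cutoff: float = 0.5):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    blur = cv2.GaussianBlur(img, (5, 5), 0)
    # Adaptive threshold copes with uneven lighting in photos of hand drawings.
    binary = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 11, 2)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 100:  # ignore specks
            continue
        # Placeholder confidence: fraction of the bounding box the contour fills.
        conf = cv2.contourArea(c) / float(w * h)
        if conf >= conf_cutoff:
            boxes.append((x, y, w, h, conf))
    return boxes

# Varying the cutoff trades recall for precision on noisy hand-drawn wireframes.
for cutoff in (0.3, 0.5, 0.7):
    print(cutoff, len(extract_ui_boxes("wireframe.png", cutoff)))
```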
Participated in the Twitter Bias Hackathon, highlighting instances of bias in Twitter's saliency model used for image cropping and building a case study with Tokyo Olympics images as examples. Achieved $9^{th}$ rank in the hackathon.
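A sketch of the probing idea: locate the saliency peak that a cropper would center on and compare it across paired photos. OpenCV's spectral-residual saliency is only a stand-in assumption; the case study targeted Twitter's own saliency cropping model.

```python
# Sketch: approximate where a saliency-based cropper would center its crop.
import cv2
import numpy as np

def saliency_crop_center(image_path: str):
    img = cv2.imread(image_path)
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = saliency.computeSaliency(img)
    if not ok:
        raise RuntimeError("saliency computation failed")
    # The pixel with maximum saliency approximates the crop center.
    y, x = np.unravel_index(np.argmax(sal_map), sal_map.shape)
    return x, y

# Comparing crop centers across paired photos (e.g. athletes from the Tokyo
# Olympics) shows which subjects the cropper tends to keep in frame.
print(saliency_crop_center("olympics_photo.jpg"))
```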