Author: Tanya Malhotra

Tanya Malhotra is a final-year undergraduate at the University of Petroleum & Energy Studies, Dehradun, pursuing a BTech in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning. She is a Data Science enthusiast with strong analytical and critical-thinking skills, along with an ardent interest in acquiring new skills, leading groups, and managing work in an organized manner.

Salesforce AI Introduces CodeChain: An Innovative Artificial Intelligence Framework for Modular Code Generation Through a Chain of Self-Revisions with Representative Sub-Modules

A major objective in the study of Artificial Intelligence is the development of AI systems that can provide useful computer programs to address challenging...

Meet ScaleCrafter: Unlocking Ultra-High-Resolution Image Synthesis with Pre-trained Diffusion Models

The development of image synthesis techniques has experienced a notable upsurge in recent years, garnering major interest from the academic and industry worlds. Text-to-image...

Researchers from Stanford, NVIDIA, and UT Austin Propose Cross-Episodic Curriculum (CEC): A New Artificial Intelligence Algorithm to Boost the Learning Efficiency and Generalization of...

Sequential decision-making problems are undergoing a major transition due to the paradigm shift brought about by the introduction of foundation models. These models, such...

Amazon Researchers Present a Deep Learning Compiler for Training Consisting of Three Main Features- a Syncfree Optimizer, Compiler Caching, and Multi-Threaded Execution

One of the biggest challenges in Machine Learning has always been to train and use neural networks efficiently. A turning point was reached with...

Researchers from Princeton Introduce ShearedLLaMA Models for Accelerating Language Model Pre-Training via Structured Pruning

Large Language Models (LLMs) have become extremely popular because of their outstanding capabilities in a variety of natural language tasks. Though they are growing...

This Artificial Intelligence Survey Research Provides A Comprehensive Overview Of Large Language Models Applied To The Healthcare Domain

Natural language processing (NLP) systems have long relied heavily on Pretrained Language Models (PLMs) for a variety of tasks, including speech recognition, metaphor processing,...

How Can Transformers Handle Longer Inputs? CMU and Google Researchers Unveil a Novel Approach (FIRE): A Functional Interpolation for Relative Position Encoding

Transformer-based Language Models have uplifted the domain of Natural Language Processing (NLP) in recent years. Their capacity to comprehend and produce text that is...

Researchers at Stanford Propose DDBMs: A Simple and Scalable Extension to Diffusion Models Suitable for Distribution Translation Problems

Diffusion models have recently seen much success and attention in the Artificial Intelligence community. Belonging to the family of generative models, these models can...

A New Machine Learning Research from MIT Shows How Large Language Models (LLMs) Comprehend and Represent the Concepts of Space and Time

Large Language Models (LLMs) have shown some incredible skills in recent times. The well-known ChatGPT, which is built on GPT's transformer architecture,...

Meet Waymo’s MotionLM: The State-of-the-Art Multi-Agent Motion Prediction Approach that can Make it Possible for Large Language Models (LLMs) to Help Drive Cars

Autoregressive language models have excelled at predicting the subsequent subword in a sentence without the need for any predefined grammar or parsing concepts. This...

Can We Truly Trust AI Watermarking? This AI Paper Unmasks the Vulnerabilities in Current Deepfake Defense Methods

The rapid advancement in the field of generative Artificial Intelligence has brought about significant changes in the landscape of digital content creation. These AI...

Google DeepMind Researchers Introduce Promptbreeder: A Self-Referential and Self-Improving AI System that can Automatically Evolve Effective Domain-Specific Prompts in a Given Domain

Large Language Models (LLMs) have gained a lot of attention for their human-imitating properties. These models are capable of answering questions, generating content, summarizing...