LongBench v2: Towards Deeper Understanding and Reasoning on Realistic Long-context Multitasks

LongBench v2 is a new test to see how well AI can understand and answer questions about really long texts, like books, articles, and code. The test has 503 multiple-choice questions, and they are hard even for human experts: given 15 minutes per question, the experts got only a little over half of them right. The test covers lots of different types of questions, like figuring out who committed a crime in a story, translating a new language, and understanding how a computer program works. The test is hard because it makes AI think deeply about the information rather than just find simple answers. The researchers who made LongBench v2 hope it will help make AI even smarter and better at understanding complicated things. https://arxiv.org/pdf/2412.15204

Read More

SWE-Bench: Evaluating Language Models on Real-World GitHub Issues

This research paper introduces SWE-Bench, a new way to test how good large language models are at solving real problems with computer code. It uses real problems and code from GitHub, a website where programmers share and work on code together. These problems are more complex than what language models are usually tested on, requiring them to understand lots of code and make changes across multiple files. The researchers also created SWE-Bench Lite, a smaller version of SWE-Bench, and SWE-Llama, a language model fine-tuned to fix code. The study found that even the best language models resolved only a small fraction of the issues, and usually only the simplest ones, showing that there’s still a long way to go before they can be really helpful to programmers. The paper also suggests using tools that measure how complex code is to better understand how language models are learning. https://arxiv.org/pdf/2310.06770
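
To make the setup concrete, here is a minimal sketch of the kind of check SWE-Bench implies: apply a model-generated patch to a real repository and see whether the issue’s previously failing tests now pass. The function and arguments below are illustrative simplifications, not the benchmark’s actual harness.

```python
import subprocess

def candidate_patch_resolves_issue(repo_dir: str, patch_file: str,
                                   failing_tests: list[str]) -> bool:
    """Illustrative stand-in for a SWE-Bench-style check: a task counts as
    'resolved' only if the model's patch applies cleanly and the tests that
    the original GitHub issue broke now pass."""
    # Try to apply the model-generated patch to the repository checkout.
    applied = subprocess.run(["git", "apply", patch_file],
                             cwd=repo_dir, capture_output=True)
    if applied.returncode != 0:
        return False  # the patch did not even apply

    # Re-run the issue's previously failing tests against the patched code.
    tests = subprocess.run(["python", "-m", "pytest", *failing_tests],
                           cwd=repo_dir, capture_output=True)
    return tests.returncode == 0
```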

Read More

FrontierMath: A Benchmark for Advanced Mathematical Reasoning in AI

This research paper introduces FrontierMath, a collection of very hard math problems designed to test how well AI can solve advanced math. The problems in FrontierMath are brand-new and unpublished, and they cover many different areas of math, such as number theory, combinatorics, and algebraic geometry. The researchers found that even the smartest AI today can only solve a tiny fraction (less than 2%) of these problems. To make sure the problems were really tough, they asked famous mathematicians, including winners of the Fields Medal, the highest prize in math, to look at them. These experts agreed that the problems were very difficult and predicted that it would likely be years before AI systems can solve them on their own. The paper also explains how FrontierMath was created, how AI systems are tested on the problems, and what kinds of math are included. The researchers hope that FrontierMath will help push AI to become better at solving complex math problems, which could eventually help mathematicians with their research. https://arxiv.org/pdf/2411.04872
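
One practical detail worth illustrating: FrontierMath problems are written so that each one has a definite final answer a script can check automatically, rather than a proof a human must read. The snippet below is a hypothetical illustration of that kind of automated check using SymPy; it is not the benchmark’s actual grading code.

```python
import sympy

def answer_is_correct(model_answer: str, reference) -> bool:
    """Hypothetical checker in the spirit of FrontierMath's automatic grading:
    parse the model's final answer and require exact symbolic equality with
    the reference answer (no partial credit, no approximate matches)."""
    try:
        candidate = sympy.sympify(model_answer)
    except (sympy.SympifyError, TypeError):
        return False
    return sympy.simplify(candidate - reference) == 0

# Toy example: a question whose reference answer is the Mersenne prime 2**127 - 1.
print(answer_is_correct("170141183460469231731687303715884105727", 2**127 - 1))
```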

Read More

GPQA: A Graduate-Level Google-Proof Q&A Benchmark

This research paper describes the creation and analysis of GPQA, a new set of multiple-choice questions designed to be very hard to answer, even with the help of Google. The questions cover advanced topics in biology, physics, and chemistry, and were written and checked for accuracy by experts with PhDs in those fields. To make sure the questions were extra tough, the researchers also had highly skilled non-experts try to answer them using the internet; these non-experts held PhDs too, but in different subjects. The goal was to create questions that would be challenging even for very smart people who don’t have specific knowledge of the subject. The researchers also tested the questions on advanced AI systems, like GPT-4, to see how well they could answer them. They found that even with access to the internet, the AI systems struggled to do as well as the experts, showing just how difficult these questions really are. The researchers hope that GPQA will be a valuable tool for testing new ways to help people understand and use information from AI systems, especially when those systems are tackling really hard problems that even experts find challenging. https://arxiv.org/pdf/2311.12022

Read More

Monte Carlo Inference for Semiparametric Bayesian Regression

This paper, published in the Journal of the American Statistical Association, describes a new way to do Bayesian regression, a type of statistical analysis used to figure out the relationship between different things. Regular Bayesian regression can be tricky when the data doesn’t fit certain patterns. To make it easier to work with different types of data, this paper suggests using something called a transformation. A transformation is like changing the way the data looks so it’s easier to analyze. Imagine trying to fit puzzle pieces together – sometimes you need to turn or flip them to make them fit. The paper explains a new method for figuring out the best transformation to use and provides ways to apply it with different types of regression models, like linear regression and quantile regression. It also shows how well the method works on simulated and real data. Finally, the paper provides mathematical proof that this new approach is reliable and accurate. https://www.tandfonline.com/doi/epdf/10.1080/01621459.2024.2395586?needAccess=true
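
As a rough illustration of the general idea (not the paper’s actual algorithm), the sketch below transforms a skewed response so it looks more bell-shaped and then draws Monte Carlo samples from a standard Bayesian linear regression on the transformed scale. The rank-based transformation and the prior settings are stand-ins chosen for simplicity; the paper estimates the transformation as part of the model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy data with a skewed response: an ordinary Gaussian-error regression fits poorly.
n, p = 200, 3
X = rng.normal(size=(n, p))
y = np.exp(X @ np.array([0.5, -0.3, 0.2]) + 0.3 * rng.normal(size=n))

# Stand-in for a learned transformation: a rank-based Gaussianization of y.
ranks = stats.rankdata(y) / (n + 1)
z = stats.norm.ppf(ranks)  # transformed response, roughly bell-shaped

# Conjugate Bayesian linear regression on the transformed scale (assumed priors).
Xd = np.column_stack([np.ones(n), X])
tau2, sigma2 = 10.0, 1.0
V = np.linalg.inv(Xd.T @ Xd / sigma2 + np.eye(p + 1) / tau2)   # posterior covariance
m = V @ (Xd.T @ z / sigma2)                                    # posterior mean
posterior_draws = rng.multivariate_normal(m, V, size=1000)     # Monte Carlo samples
print(posterior_draws.mean(axis=0))
```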

Read More

OpenAI o3 Breakthrough High Score on ARC-AGI Competition: Has AGI Been Achieved?

OpenAI has created a new AI model, called o3, that is much better at solving problems it has never seen before than older AI systems like GPT-3 and GPT-4. This is a big deal because for many years, AI researchers have been trying to create AI that can learn new things quickly, just like humans. o3 was tested on a special set of problems called ARC-AGI, which are designed to be very hard for AI but easy for humans. Surprisingly, o3 was able to solve 75.7% of these problems under standard compute limits (and even more when allowed to spend much more computing power), which is much higher than any other AI system has achieved before. This means that o3 might be getting closer to human-level performance on this kind of test, although it still makes mistakes on some easy problems. Researchers are excited about o3 because it shows that it is possible to build AI that can learn and adapt to new situations. https://arcprize.org/blog/oai-o3-pub-breakthrough
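
For readers unfamiliar with ARC-AGI, each task is a small JSON object of coloured grids: a few “train” input/output pairs that demonstrate a hidden rule, plus “test” inputs the solver must transform the same way, scored by exact match. The task and solver below are made up just to show the format; real ARC tasks are far harder to write a rule for.

```python
import json

# A made-up ARC-style task: the hidden rule is "mirror each row left-to-right".
task = json.loads("""{
  "train": [
    {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
    {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]}
  ],
  "test": [
    {"input": [[3, 0], [0, 3]], "output": [[0, 3], [3, 0]]}
  ]
}""")

def mirror_rows(grid):
    # A toy solver that fits this made-up task: reverse every row of the grid.
    return [list(reversed(row)) for row in grid]

prediction = mirror_rows(task["test"][0]["input"])
print(prediction == task["test"][0]["output"])  # exact-match scoring -> True
```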

Read More

SciAgents: Automating Scientific Discovery

This research paper talks about a new computer program called SciAgents that can help scientists discover new things, especially about materials inspired by nature. SciAgents uses a special database called a knowledge graph that contains lots of scientific information about different materials and how they work. The program also uses large language models (LLMs) like ChatGPT, which are really good at understanding and using language. By combining information from the knowledge graph and LLMs, SciAgents can come up with new ideas for research projects. For example, it might suggest combining silk with pigments from dandelions to create a new material that is strong, colorful, and environmentally friendly. SciAgents can also explain its ideas in detail and even suggest experiments to test them. The researchers believe that SciAgents could help scientists make important discoveries much faster than they could on their own. https://onlinelibrary.wiley.com/doi/epdf/10.1002/adma.202413523
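
A very small sketch of the core mechanism, under the assumption that the key step is sampling a path through the knowledge graph and turning it into a prompt for the language-model agents. The toy graph, concepts, and prompt wording below are invented for illustration; the real system uses a large ontological graph and several cooperating agents.

```python
import random
import networkx as nx

# Toy stand-in for SciAgents' knowledge graph of materials concepts.
G = nx.Graph()
G.add_edges_from([
    ("silk", "high tensile strength"),
    ("silk", "biocompatibility"),
    ("silk", "dandelion pigment"),
    ("dandelion pigment", "structural color"),
    ("structural color", "optical sensing"),
    ("high tensile strength", "lightweight composites"),
    ("biocompatibility", "tissue scaffolds"),
])

# Sample a path between two concepts and turn it into a research prompt;
# the multi-agent drafting/critique loop that follows is omitted here.
a, b = random.sample(list(G.nodes), 2)
path = nx.shortest_path(G, a, b)
prompt = ("Propose a bio-inspired materials research hypothesis connecting "
          "these concepts, in order: " + " -> ".join(path))
print(prompt)
```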

Read More

ModernBERT: A Highly Efficient Encoder-Only Transformer Model

This research paper introduces ModernBERT, a new and improved computer program that understands language. ModernBERT is like a student who has read tons of books and code and can now answer questions and find information really well. It’s especially good at finding information in long documents and understanding computer code, which are things that older programs struggled with. ModernBERT is also super fast and efficient, which means it can work quickly without using up a lot of computer power or memory. The researchers tested ModernBERT on many different tasks, like understanding the meaning of sentences, finding relevant information in large amounts of text, and understanding computer code. The results showed that ModernBERT outperformed other encoder models of similar size while also running faster, making it the best of its kind! https://arxiv.org/pdf/2412.13663
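
If you want to try it, a minimal usage sketch with the Hugging Face transformers library might look like this (the checkpoint name below is an assumption; check the official model card for the exact identifier and requirements):

```python
from transformers import pipeline

# Assumed checkpoint id for the released base model. ModernBERT is encoder-only,
# so masked-token prediction is a natural first test.
fill_mask = pipeline("fill-mask", model="answerdotai/ModernBERT-base")

for candidate in fill_mask("Encoder-only models like ModernBERT are often used for [MASK] and classification."):
    print(candidate["token_str"], round(candidate["score"], 3))
```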

Read More

Enhancing LLM Reasoning with Argumentative Querying

This research paper introduces a new technique called Critical-Questions-of-Thought (CQoT) to help Large Language Models (LLMs), which are like super-smart computer programs, get better at solving logic and math problems. The idea is that by asking the LLM a series of “critical questions” drawn from argumentation theory, the study of how humans argue and reason, the LLM can double-check its work and avoid making mistakes. This is similar to how we carefully think through the steps of a math problem before writing down the final answer. The researchers tested CQoT on different LLMs and found that it really helped them improve their scores on challenging reasoning and math tests. This suggests that giving LLMs more “time to think” and encouraging them to use critical thinking strategies can help them become even smarter. https://arxiv.org/pdf/2412.15177
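
The pipeline can be pictured as a small prompting loop: draft a reasoning plan, audit it against a fixed list of critical questions, and only then produce the answer. The sketch below assumes a generic chat(prompt) function for whatever LLM API you use, and the questions shown are paraphrases, not the paper’s exact list.

```python
# Paraphrased examples of "critical questions"; the paper defines its own set.
CRITICAL_QUESTIONS = [
    "Are the premises of each step actually given in the problem or established earlier?",
    "Does each step's conclusion really follow from its premises?",
    "Is there a counterexample or overlooked case that would break any step?",
]

def critical_questions_of_thought(chat, problem: str) -> str:
    """Sketch of a CQoT-style loop: plan, self-audit with critical questions, answer."""
    plan = chat(f"Outline, step by step, a reasoning plan to solve:\n{problem}")
    for question in CRITICAL_QUESTIONS:
        # Ask the model to audit its own plan and revise it if a check fails.
        plan = chat(
            f"Problem:\n{problem}\n\nPlan:\n{plan}\n\n"
            f"Critical question: {question}\n"
            "If any step fails this check, rewrite the plan; otherwise repeat it unchanged."
        )
    return chat(f"Problem:\n{problem}\n\nUsing this vetted plan:\n{plan}\n\nGive the final answer.")
```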

Read More

Qwen2.5 Technical Report

This report describes Qwen2.5, a family of large language models (LLMs) designed for a wide range of uses. Qwen2.5 has been significantly improved from earlier versions, using a massive dataset of 18 trillion tokens (pieces of words) for training. This extensive training gives Qwen2.5 a strong understanding of general knowledge, specialized expertise, and reasoning abilities. It also excels in following instructions, analyzing structured data like tables and JSON files, and generating long texts. Qwen2.5 is available in sizes ranging from a 0.5-billion-parameter model suitable for limited hardware up to a 72-billion-parameter flagship, plus specialized models for math and coding. The report highlights the rigorous evaluation process used to ensure Qwen2.5’s quality and its competitive performance compared to other leading LLMs, making it a powerful tool for various applications. https://arxiv.org/pdf/2412.15115
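
Since the models are openly released, a minimal inference sketch with the Hugging Face transformers library might look like the following (the checkpoint id is an assumption; pick a size that fits your hardware):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"  # assumed instruct checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Build a chat-formatted prompt and generate a short reply.
messages = [{"role": "user", "content": "Summarize what a JSON schema is in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```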

Read More