Jesse Ellis
February 26, 2025 11:50
NVIDIA’s NIM microservices for LLMs are streamlining the scientific literature review process, delivering improved speed and accuracy in information extraction and classification.
NVIDIA’s NIM microservices for large language models (LLMs) stand to greatly improve the efficiency of scientific literature review. The development addresses a traditionally labor-intensive process: compiling the systematic reviews that help both newcomers and experienced researchers understand and navigate a field. According to the NVIDIA blog, the microservices can streamline the review process by rapidly extracting and synthesizing information from a wide range of databases.
Challenges of the traditional review process
The existing approach to literature review is time-consuming: papers must be collected, read, and summarized by hand. The interdisciplinary nature of many research topics complicates the process further, often demanding expertise beyond a researcher’s primary field. In 2024 the Web of Science database indexed more than 218,650 review articles, underscoring the central role such reviews play in academic research.
Using LLMs to improve efficiency
The adoption of LLMs marks a pivotal change in how literature reviews are carried out. At AI Codefest Australia, NVIDIA worked alongside AI experts to refine methods for deploying NIM microservices. The effort focused on optimizing LLMs for literature analysis, allowing researchers to process complex datasets more effectively. A team from the ARC Special Research Initiative Securing Antarctica’s Environmental Future (SAEF) successfully implemented a Q&A application on NVIDIA’s Llama 3.1 8B NIM microservice to extract relevant data from a broad body of literature on environmental change.
Significant processing improvements
Initial tests of the system showed it can dramatically reduce the time required for information extraction. Using parallel processing and NV-Ingest, the team achieved a 25.25x speedup, cutting the time to process their scientific literature database to about 30 minutes on an NVIDIA A100 GPU. This represents a time saving of more than 99% compared with the existing manual method.
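The speedup comes from keeping many requests in flight at once rather than processing papers one by one. A minimal sketch of that fan-out pattern, assuming a hypothetical per-document `extract_fn` that wraps one request to the serving endpoint (NV-Ingest’s own pipeline is more involved):

```python
from concurrent.futures import ThreadPoolExecutor


def process_corpus(documents, extract_fn, max_workers=16):
    """Run extract_fn over every document concurrently.

    Requests to an LLM-serving endpoint are I/O-bound on the client side,
    so a thread pool keeps many requests queued against the GPU-backed
    service at once; results come back in input order.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(extract_fn, documents))
```

With a batched, GPU-served endpoint, client-side concurrency of this kind (combined with parallel document ingestion) is what turns per-paper latency into the order-of-magnitude throughput gains the team reported.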
Automated classification and future directions
Beyond information extraction, the team used LLMs to automatically classify articles within complex datasets. A Llama-3.1-8B model fine-tuned with a LoRA adapter enables rapid classification, cutting the time to roughly 2 seconds per article compared with manual effort. Future plans include refining the workflows and user interfaces to make these capabilities more widely accessible and easier to deploy.
Overall, NVIDIA’s approach demonstrates the transformative potential of AI in streamlining the research process, enabling scientists to engage with interdisciplinary research with greater speed and depth.
Image source: Shutterstock