Article
Fast and Memory-Efficient TFIDF Calculation for Text Analysis of Large Datasets
Advances and Trends in Artificial Intelligence. Artificial Intelligence Practices. IEA/AIE 2021. Lecture Notes in Computer Science (2021)
  • Dr. Samah A. Senbel
Abstract
Term Frequency – Inverse Document Frequency (TFIDF) is a vital first step in text analytics for information retrieval and machine learning applications. It is a memory-intensive and complex task because it requires creating and processing a large sparse matrix of term frequencies, with documents as rows and terms as columns, populated with the frequency of each word in each document.
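As a minimal sketch of the standard TFIDF formulation described above (pure Python; the toy corpus and function name are ours, not the paper's):

```python
import math

def tfidf(docs):
    """Compute TF-IDF weights for a list of tokenized documents.

    Standard formulation (illustrative, not the paper's code):
      tf(t, d) = raw count of term t in document d
      idf(t)   = log(N / df(t)), where df(t) is the number of
                 documents containing t and N is the corpus size.
    """
    n_docs = len(docs)
    # Document frequency: number of documents each term appears in.
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    # TF-IDF weight for each (document, term) pair.
    weights = []
    for doc in docs:
        tf = {}
        for term in doc:
            tf[term] = tf.get(term, 0) + 1
        weights.append({t: c * math.log(n_docs / df[t]) for t, c in tf.items()})
    return weights

docs = [["the", "cat", "sat"], ["the", "dog", "ran"], ["the", "cat", "ran"]]
w = tfidf(docs)
# "the" appears in every document, so its idf (and weight) is log(3/3) = 0.
```

Because most terms appear in only a few documents, the resulting document-by-term matrix is overwhelmingly zero, which is why a sparse representation is needed at scale.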
The standard method of storing the sparse matrix is the “Compressed Sparse Row” (CSR) format, which represents it as three one-dimensional arrays: the nonzero term frequencies, their column ids, and a row-pointer array marking where each document's entries begin. We propose an alternative to CSR: a list of lists (LIL), where each document is represented by its own list of tuples, each tuple storing a column id and its term-frequency value. We implemented both techniques to compare their memory efficiency and speed. The new LIL representation increases memory capacity by 52% and is only 12% slower in processing time. This enables researchers with limited processing power to work on larger text-analysis datasets.
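To make the two layouts concrete, here is an illustrative sketch (the tiny example matrix is ours, not the paper's) of the same term-frequency matrix stored both ways:

```python
# Term-frequency matrix with documents as rows and terms as columns:
#   doc0: [2, 0, 1]
#   doc1: [0, 3, 0]

# CSR: nonzero values and their column ids, plus a row-pointer array
# giving each document's slice into those arrays.
data    = [2, 1, 3]
indices = [0, 2, 1]    # column id of each nonzero entry
indptr  = [0, 2, 3]    # row i occupies data[indptr[i]:indptr[i+1]]

# LIL: one list per document of (column_id, term_frequency) tuples,
# so the row id is implicit in the list position.
lil = [
    [(0, 2), (2, 1)],  # doc 0
    [(1, 3)],          # doc 1
]

# Look up the frequency of term 2 in document 0 in each representation.
row = 0
csr_row = dict(zip(indices[indptr[row]:indptr[row + 1]],
                   data[indptr[row]:indptr[row + 1]]))
csr_value = csr_row[2]
lil_value = dict(lil[row])[2]
assert csr_value == lil_value == 1
```

The per-document lists make row-wise access and incremental construction natural, which is the trade-off the paper measures against CSR's flat-array compactness.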
Publication Date
Summer 2021
Citation Information
Samah A. Senbel. "Fast and Memory-Efficient TFIDF Calculation for Text Analysis of Large Datasets" Advances and Trends in Artificial Intelligence. Artificial Intelligence Practices. IEA/AIE 2021. Lecture Notes in Computer Science (2021)
Available at: http://works.bepress.com/samah-senbel/31/