Overview

Project Gutenberg (PG) is a volunteer effort to digitize and archive cultural works, to 'encourage the creation and distribution of eBooks'. It was founded in 1971 by Michael S. Hart and is the oldest digital library. This dataset is a collection of the 1,000 most popular books on Project Gutenberg, as determined by number of downloads. Each book has information about its authorship, publication date, Library of Congress classification, and a few other fields. It also has some simple computed statistics based on common metrics such as sentiment analysis, Flesch-Kincaid reading level, and average sentence length.

https://www.gutenberg.org/ebooks/search/?sort_order=downloads

Explore Structure

classics.get_books() returns a list of dictionaries, one per book:

Index Type Example Value
0 dict { }
... ... ...

Each book dictionary has three top-level keys:

Key Type Example Value Comment
"bibliography" dict { }
"metadata" dict { }
"metrics" dict { }

book["bibliography"]

Key Type Example Value Comment
"publication" dict { }
"title" str "Pride and Prejudice"
"author" dict { }
"languages" list [ ]
"subjects" list [ ]
"congress classifications" list [ ]
"type" str "Text"

book["bibliography"]["author"]

Key Type Example Value Comment
"death" int 1817 The recorded year of the author's death. Unknown death years are coded as 0.
"name" str "Austen, Jane"
"birth" int 1775 The recorded year of the author's birth. Unknown birth years are coded as 0.

book["bibliography"]["publication"]

Key Type Example Value Comment
"month name" str "June"
"full" str "June, 1998"
"year" int 1998 The year when the book was published according to Project Gutenberg. Keep in mind that this may not be the original publication date of the work, just of that particular edition. Missing values are coded as 0.
"day" int 1 The day of the month when the book was published. Missing values are coded as 0.
"month" int 6 The month of the year when the book was published; 1 corresponds to January, 2 to February, etc. Missing values are coded as 0.

book["bibliography"]["languages"] (a list of language codes)

Index Type Example Value
0 str "en"
... ... ...

book["bibliography"]["subjects"] (a list of subject headings)

Index Type Example Value
0 str "Sisters -- Fiction"
... ... ...

book["bibliography"]["congress classifications"] (a list of Library of Congress classification codes)

Index Type Example Value
0 str "PR"
... ... ...

book["metadata"]

Key Type Example Value Comment
"url" str "https://www.gutenberg.org/ebooks/1342"
"downloads" int 36576 The number of times this book has been downloaded from Project Gutenberg, as of the last update (circa Spring 2016).
"id" int 1342 Every book on Project Gutenberg has a unique ID number. You can use this number to view the book on Project Gutenberg (e.g., book 110 is http://www.gutenberg.org/ebooks/110).
"rank" int 1 The rank of this book compared to the other books on Project Gutenberg, measured by number of downloads. A lower rank indicates that the book is more popular.
"formats" dict { }

book["metadata"]["formats"]

Key Type Example Value Comment
"total" int 8 Project Gutenberg makes books available in a wide variety of file formats, including raw text files, HTML web pages, audio books, etc. This field indicates the number of formats in which this book is available.
"types" list [ ]

book["metadata"]["formats"]["types"] (a list of format type strings)

Index Type Example Value
0 str "text/plain"
... ... ...

book["metrics"]

Key Type Example Value Comment
"difficulty" dict { }
"statistics" dict { }
"sentiments" dict { }

book["metrics"]["statistics"]

Key Type Example Value Comment
"polysyllables" int 4603 The number of words that have three or more syllables.
"characters" int 586794 The number of characters (letters and symbols) in the text, not the number of people in the story.
"average sentence length" float 18.0
"words" int 121533
"sentences" int 6511
"syllables" float 170648.1
"average sentence per word" float 0.05
"average letter per word" float 4.83

book["metrics"]["sentiments"]

Key Type Example Value Comment
"polarity" float 0.136713377605 Sentiment analysis attempts to determine the attitude of a speaker or writer with respect to some topic, or the overall contextual polarity of a document. Polarity in particular refers to how positive or negative the author is towards the content.
"subjectivity" float 0.52223914947 Subjectivity (as opposed to objectivity) refers to whether the text is opinionated or attempts to stay factual.

book["metrics"]["difficulty"]

Key Type Example Value Comment
"flesch reading ease" float 70.13 The Flesch Reading Ease score uses the sentence length (number of words per sentence) and the number of syllables per word in an equation to calculate reading ease. Texts with a very high Flesch Reading Ease score (around 100) are very easy to read, with short sentences and no words of more than two syllables.
"automated readability index" float 10.7 The Automated Readability Index indicates the understandability of the text. The number is an approximate US grade level needed to comprehend the text, calculated from characters per word and words per sentence.
"coleman liau index" float 10.73 The Coleman-Liau Index also indicates the understandability of the text as an approximate US grade level, but is calculated using characters instead of syllables, similar to the Automated Readability Index.
"gunning fog" float 9.2 The Gunning Fog Index measures the readability of English writing. The index estimates the years of formal education needed to understand the text on a first reading. The formula uses the ratio of words to sentences and the percentage of words that are complex (i.e., have three or more syllables).
"linsear write formula" float 13.5 Linsear Write is a readability metric for English text, purportedly developed for the United States Air Force to help calculate the readability of its technical manuals. It estimates the US grade level of a text sample based on sentence length and the number of words that have three or more syllables.
"dale chall readability score" float 5.7 The Dale-Chall Readability Score gauges the comprehension difficulty that readers encounter when reading a text. It uses a list of 3,000 words that groups of fourth-grade American students could reliably understand, and considers any word not on that list to be difficult. The number is an approximate US grade level needed to comprehend the text.
"flesch kincaid grade" float 7.9 The Flesch-Kincaid Grade Level Formula presents a score as a US grade level, making it easier to interpret. It uses a formula similar to the Flesch Reading Ease measure.
"smog index" float 3.1 The SMOG grade estimates the years of education needed to understand a piece of writing. SMOG is an acronym for "Simple Measure of Gobbledygook". Its formula is based on the number of polysyllables (words with three or more syllables) and the number of sentences.
"difficult words" int 9032 The number of words in the text that are considered "difficult"; that is, they are not on the list of 3,000 words considered understandable by fourth-grade American students.

Downloads

Download all of the following files.

Usage

This library provides one function you can use.
import classics
# This may be slow!
list_of_book = classics.get_books()
Additionally, the function can return a small sample of the dataset when given an extra argument. Using this sampled data may be much faster. When you are sure your code is correct, you can remove the argument to process the full dataset.
import classics
# Load only a small sample of the dataset; this is much faster.
list_of_book = classics.get_books(test=True)

Documentation

 classics.get_books(test=False)

Returns a list of book dictionaries from the dataset. If test is True, only a small sample of the books is returned instead of the full dataset.
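A minimal usage sketch, assuming the structure described under Explore Structure: load the sampled data while developing, then rank the books by download count. Remove test=True to run the same code against the full dataset.

import classics
# Use the sampled data while developing; remove test=True for the full dataset.
books = classics.get_books(test=True)
# Sort books by download count, most popular first.
by_downloads = sorted(books, key=lambda book: book["metadata"]["downloads"], reverse=True)
for book in by_downloads[:5]:
    bib = book["bibliography"]
    print(book["metadata"]["downloads"], "-", bib["title"], "by", bib["author"]["name"])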