• alignment

    Alignment is a term used in connection with parallel corpora. A parallel corpus consists of a text and its translation into one or more languages. Parallel corpora need to be divided into segments. A segment usually corresponds to a sentence. Alignment refers to information that tells Sketch Engine which segment (sentence) in one language is the translation of which segment (sentence) in another language. The easiest way to provide the alignment information to Sketch Engine is to upload the data in a tabular format (e.g. Excel).  Alternatives that can account for more complex alignment are also available. see Build a parallel corpus
  • ARF – Average Reduced Frequency [ statistics ]

    a modified frequency which prevents the result from being excessively influenced by one part of the corpus (e.g. one or more documents) which contains a high concentration of the token. If the token is evenly distributed across the corpus, ARF and frequency per million will be comparable. see also ARF definition
  • CAT tool

    A CAT tool is a computer-assisted translation tool, software that helps translators maintain consistency in terminology across their translation jobs and also aids the translation process by suggesting (or translating automatically) passages which the translator has already translated in the past.
  • cluster

    a process of creating groups of words in the thesaurus or word sketch. Words are grouped according to their shared collocational behaviour. See more in the Clustering Neighbours documentation
  • collocate

    a part of a collocation that is not the node. A collocate is dependent on the node. The collocate strong and the node wind make up the collocation strong wind.
    collocation
    collocate   node
    strong      wind
    icy         wind
    cold        wind
    The most typical collocates for every word in the language can be generated with the word sketch tool.
  • collocation

    a collocation is a sequence or combination of words that occur together more often than would be expected by chance (Wikipedia: Collocation). A collocation, e.g. fatal error, typically consists of a node (error) and a collocate (fatal). Collocations can have different strengths: nice house is a weak collocation because both nice and house can combine with lots of other words; on the other hand, the Opera House is a strong collocation because it is very typical for opera to occur next to house and, at the same time, opera does not combine with many other words. In Sketch Engine, the tool to use for collocations is the word sketch. The strength of a collocation is expressed by the logDice score.
  • comparable corpus [ corpus-types ]

    A comparable corpus is a corpus consisting of texts from the same domain in two or more languages. In contrast to a parallel corpus, the texts are not translations of each other; they only come from the same domain and share the same metadata. An example of a comparable corpus is a corpus made from Wikipedia.
  • compile

    Corpus compilation refers to the processing of the corpus data (text) with the tools available for the language and converting the text into a corpus. Only a compiled corpus can be searched. see corpus compilation
  • concordance [ feature ]

    a list of all examples of the search word or phrase found in a corpus, usually in the format of a KWIC concordance with the search word highlighted in the centre of the screen and some context to the right and to the left. see also KWIC
  • concordancer [ feature ]

    A concordancer is a tool (a piece of software) which searches a text corpus and displays a concordance. A concordancer is one of the features in Sketch Engine which allows for simple corpus searches as well as queries involving complex criteria that search for grammatical or lexical structures. see also concordance
  • cooccurrence [ text-analysis ]

    cooccurrence or co-occurrence is a term which expresses how often two terms from a corpus occur alongside each other in a certain order. It usually indicates words which together create a new meaning, called a phraseme or multi-word expression, e.g. black sheep or get on. Sketch Engine helps find such words using the word sketch tool or the collocation search. Read more about further tools for text analysis.
  • corpus

    a large collection of texts used for studying language. A corpus is usually annotated (= words are labelled with information about the part of speech and grammatical category). The terms corpus, text corpus and language corpus are interchangeable. Using a corpus for any type of linguistic or language-oriented work ensures the outcomes reflect the real use of the language. more on corpora»
  • corpus architect

    an intuitive tool inside Sketch Engine for creating corpora from documents or the Web which does not require any expert knowledge. See the create your own corpus page.
  • corpus manager

    a program used to manage text corpora, i.e. to build, edit, annotate and search corpora. Sketch Engine is the user interface to the corpus manager Manatee.
  • CQL

    The Corpus Query Language is a code used to set criteria for complex searches which cannot be carried out using the standard user interface controls. The criteria may include words or lemmas but also tags and other attributes, text types or structures. Conditions can be set for optional tokens or token repetition. Learn CQL: https://www.sketchengine.eu/corpus-querying/
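    For illustration, a possible query (the values are chosen only as an example; the attributes available depend on the corpus): [lemma="make"] []{0,2} [lemma="decision"] finds the lemma make followed by up to two arbitrary tokens and then the lemma decision, so it matches make a decision as well as make an important decision.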
  • CSV

    a type of plain text document used for saving tabular data. It is seamlessly accepted by a large variety of applications and is therefore ideal for exporting Sketch Engine results to be used in other software. CSV can be opened directly in Microsoft Excel, Open Office, Google Documents and many others.
  • deduplication

    a process during which repeated identical texts are removed and only the first occurrence of each duplicated text is kept. Deduplication can be carried out at various levels, e.g. at the level of documents, which means that if two whole documents are identical, one of them is removed.
  • disambiguation

    a process of identifying the intended meaning of a word (its lemma, part of speech) when the word form allows multiple interpretations. The result of this process is one word with one meaning.
  • distributional thesaurus [ feature ]

    an automatically produced thesaurus which identifies words that occur in contexts similar to those of the target word. It draws on the hypothesis of distributional semantics. The automatically produced thesaurus is available for each word in the corpus (more about automatic thesaurus). The distributional thesaurus in Sketch Engine is available for every language and corpus that supports word sketches. Refer to the user manual to learn how to generate the thesaurus.
  • document

    A document (called a file in old corpora) is a generic name used in Sketch Engine to refer to any file, document or webpage the corpus is made up of. If a user uploads files (such as .doc, .pdf, .txt), each of the files becomes a corpus document. If the user downloads content from the web, each web page becomes a corpus document. The beginning and end of each document is automatically marked with a structure, most typically with <doc></doc>, but certain corpora may use a different convention, such as the British National Corpus which uses <bncdoc></bncdoc>. This can be checked on the corpus info page. A corpus can also be divided into documents by manually inserting document structures into the source text. see Corpus annotation
  • document frequency [ statistics ]

    The document frequency is the number of documents in which the word or phrase appears. If the corpus has 100 documents and 2 documents contain the word city (document number 7 contains 17 instances of city, document number 31 contains 6 instances), the document frequency of city is 2, because 2 documents contain the word. It is not important how many documents the corpus contains or how many times the word appears in total. The document frequency can be better suited for comparison in situations when the corpus contains a small number of documents with an extremely high frequency of particular words. see also frequency, frequency per million, ARF, Statistics used in Sketch Engine
  • focus corpus

    In keyword and term extraction, the focus corpus is the corpus from which keywords and terms are extracted. Compare reference corpus.
  • freq/mill – frequency per million [ statistics ]

    the number of occurrences (hits) of an item normalised per million tokens, also called i.p.m. (instances per million). It is used to compare frequencies between corpora of different sizes. frequency per million = number of hits ÷ corpus size in millions of tokens. Example: A token found 10 times in a corpus of 1 million tokens will have a frequency per million equal to 10. A token found 100 times in a corpus of 100 million tokens will have a frequency per million equal to 1. The second token is less frequent. see also Statistics in Sketch Engine, Frequency per million, Average Reduced Frequency
  • frequency [ statistics ]

    Frequency (also absolute frequency) refers to the number of occurrences or hits. If a word, phrase, tag etc. has a frequency of 10, it means it was found 10 times or it exists 10 times. It is an absolute figure. It is not calculated using a specific formula. compare frequency per million; see also ARF, document frequency, Statistics used in Sketch Engine
  • GDEX

    Good Dictionary Examples is a technology in Sketch Engine which can automatically identify sentences which are suitable as dictionary example sentences or as teaching examples, i.e. are illustrative and representative. GDEX can be applied to any concordance. It will sort the lines and place the ones with the best GDEX score at the top. The GDEX technology evaluates the sentences with respect to their length and complexity, safe topics, the presence of difficult and low-frequency words and other similar criteria specified in the GDEX configuration. more on GDEX
  • gender lemma [ attribute ]

    The gender lemma is an attribute used in connection with term extraction. Its purpose is to display terminology in the correct word form in languages which distinguish gender on adjectives and nouns. The lemma alone would produce a grammatically unacceptable word form combination. Examples:
    language   word form           lemma              gender lemma
    Spanish    cámaras compactas   cámara compacto    cámara compacta
    Russian    Красной площади     красный площадь    Красная площадь
    Polish     piłce nożnej        piłka nożny        piłka nożna
  • global subcorpus

    a subcorpus that is shared with all users. See instructions on how to share a subcorpus with all users»
  • glue

    A glue <g/> is a special structure inserted into a corpus to tell Sketch Engine that two tokens, which would otherwise be displayed with a space in between, should actually be displayed without a space. Typically, do and n't will have glue between them to be displayed as don't. A glue does not have any other function, it is inserted for technical reasons only. It can, however, be used in CQL searches if there is a use case.
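    As a minimal illustration (a sketch of how this might look in a vertical file, with any additional attribute columns omitted), the tokens do and n't with a glue between them appear on separate lines:
    do
    <g/>
    n't
    and are then displayed as don't in the concordance.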
  • Grammatical relation

    A grammatical relation, or gramrel, refers to one column in the word sketch. Each column represents a category which displays collocates with the same relation to the search word, e.g. subjects of a verb or modifiers of a noun. Some columns may also display the usage statistics of the search word instead of collocates, e.g. the statistics of noun cases or verbal tenses of the search word.
  • header field

    various types of information associated with the documents of a corpus, e.g. a corpus with documents from different domains can be structured according to these domains using the header field <doc domain> with values such as "nameofdomain", i.e. <doc domain="nameofdomain">
  • keyword

    Keyword is a concept used in connection with keyword & term extraction. Keywords are words (single-token items) that appear more frequently in the focus corpus than in the reference corpus. They can be used to identify what is specific to one corpus (focus corpus) or its subcorpus in comparison with another corpus (reference corpus) or its subcorpus. Comparisons can also be made between two subcorpora of the same corpus or between the whole corpus and one of its subcorpora. Keywords can be extracted using the Keywords & Terms tool in Sketch Engine. Typically, the largest corpus in the language will be selected as the reference corpus. The user can set a different corpus or subcorpus as the reference corpus/subcorpus. see also term, term extraction
  • KWIC

    KWIC is the acronym for Key Word In Context and refers to the red text highlighted in a concordance. The red text is the result that matches the search criteria. Such a concordance might be referred to as a KWIC concordance. The KWIC concordance is the preferred format for displaying concordance data because it is easy to observe the context to the right and left. Sketch Engine allows the KWIC view to be changed to the sentence view, which displays the complete sentence. The sentence view is only useful in specific cases such as when working with GDEX.
  • lc [ attribute ]

    (also referred to as word_lc, word lowercase or word form lowercase) is a positional attribute assigned to each token in the corpus. The lc attribute is a lowercased version of the word attribute: John becomes john, Apple becomes apple, BE becomes be. The lc attribute makes the uppercase and lowercase versions of each token identical. The lc attribute is used for case-insensitive searching and analysis. see also word form, lemma (lowercase), list of attributes
  • learner corpus [ corpus-types ]

    A collection of texts produced by learners of a language, used to study the errors and mistakes learners make. Learner corpora in Sketch Engine can use both error and correction annotation. A special search interface is available to search by the former, the latter or both. see also Setting up a learner corpus
  • lemma [ attribute ]

    Lemma is a positional attribute. It is the basic form of a word, typically the form found in dictionaries. A lemmatized corpus allows searching for the basic form and including all forms of the word in the result, e.g. searching for the lemma go will find go, goes, went, going, gone. Lemma in Sketch Engine is case sensitive, so City and city are two different lemmas (City = the City of London; city = a common noun). The lemma of the first word of a sentence is always lowercase. Therefore, the search for the lemma city will also find City, but only if City appears at the beginning of a sentence. A wordlist of lemmas is a frequency list where all of go, went, gone, goes, going are counted together and listed as go. A lemma search for go will find all of go, went, gone, goes, going. The concept of the lemma is not always clearly defined and may differ between languages (or even between two corpora in the same language). For example, in Sketch Engine, many, more, most are three different lemmas in English. On the other hand, in Czech, the equally irregular mnoho, více, nejvíce share the same lemma hodně. The situation is even more complex with agglutinating languages such as Turkish, Hungarian or Japanese, where it may not be easy to decide how many affixes should be removed to produce a lemma. The term stem sometimes replaces the term lemma, but stem often refers to the very core part of the word, while several lemmas may share the same stem. In Sketch Engine, all corpora in the same language are processed using the same tools and therefore have the same lemmatization. Rare exceptions exist if the corpus was acquired from external sources including the original lemmatization. See also lemma_lc, word form, lempos, list of attributes
  • lemma_lc [ attribute ]

    lemma_lc is a positional attribute. It is a lemma converted to lowercase: apple and Apple are treated as the same thing. It is used for case-insensitive searching and case-insensitive analysis. see lemma
  • Lemmatization

    Lemmatization is the process of assigning a lemma to each word form in a corpus using an automatic tool called a lemmatizer. Lemmatization brings the benefit of searching for the base form of a word and getting all the derived forms in the result, e.g. searching for go will also find goes, went, gone, going. See also POS tagger, stemming
  • lempos [ attribute ]

    Lempos is a positional attribute. It is a combination of lemma and part of speech (pos) consisting of the lemma, a hyphen and a one-letter abbreviation of the part of speech, e.g. go-v, house-n. The part of speech abbreviations differ between corpora. Lempos is case sensitive, house-n is different from House-n. see also lempos_lc, lemma, list of attributes
  • lempos_lc [ attribute ]

    lempos_lc is a positional attribute. It is a lowercased version of lempos. All uppercase letters are converted to lowercase, thus House-n becomes identical with house-n. It is used for case insensitive searching and analysis. see also lempos, list of attributes
  • likelihood [ statistics ]

    a function of the parameters of a statistical model; it plays a key role in statistical inference and is the basis of the log-likelihood function. see Statistics in Sketch Engine
  • log-likelihood [ statistics ]

    one of the statistics computed in Sketch Engine. It is an association measure based on the likelihood function, used in tests for significance (see the log-likelihood calculator and more details)
  • logDice [ statistics ]

    a statistical measure for identifying collocations. It expresses the typicality of the co-occurrence of the node and the collocate. It is used in the word sketch feature and also when computing collocations from a concordance. It is based only on the frequency of the node, the frequency of the collocate and the frequency of the whole collocation. logDice is not affected by the size of the corpus and, therefore, can be used to compare the scores between different corpora. logDice is the preferred option when working with large corpora. see also logDice in Statistics used in Sketch Engine, A Lexicographer-Friendly Association Score (paper), T-score, MI score
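    In its general form, as given in the cited paper, logDice = 14 + log2( 2 × f(node, collocate) / ( f(node) + f(collocate) ) ), where f(node, collocate) is the frequency of the co-occurrence and f(node), f(collocate) are the individual frequencies; the theoretical maximum is 14.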
  • Longest-commonest match

    The longest-commonest match is a concept coined by Adam Kilgarriff to name the most common realisation of a collocation, i.e. the chunk of language in which the collocation appears most frequently. The longest-commonest match is part of the word sketch result screen to facilitate the understanding of how the collocation typically behaves.
  • longtag [ attribute ]

    Longtag is a detailed part-of-speech tag which usually contains more information than tag. Some corpora have tags containing only basic information on parts of speech and, in addition, a longtag attribute consisting of detailed grammatical information such as case, number, gender, etc. Longtags are available, for example, in the Estonian corpus etTenTen and the Turkish corpus trTenTen.
  • metadata

    information about the texts in the corpus: for example, year of publication, author name, publishing house, medium (written, spoken), register (formal, informal) etc. Metadata are automatically converted to text types in Sketch Engine. see Annotate a corpus  
  • MI Score [ statistics ]

    The Mutual Information score expresses the extent to which words co-occur compared to the number of times they appear separately. The MI score is strongly affected by frequency: low-frequency words tend to reach a high MI score, which may be misleading. This is why Sketch Engine allows setting a frequency limit so that low-frequency words can be excluded from the calculation. In most cases, T-score is more useful than MI score. see Concordance – Collocations, Statistics in Sketch Engine; compare T-score
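    In its commonly used form (the exact formula used by Sketch Engine is documented in Statistics used in Sketch Engine), MI = log2( f(x,y) × N / ( f(x) × f(y) ) ), where f(x,y) is the co-occurrence frequency, f(x) and f(y) are the frequencies of the two words and N is the corpus size.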
  • minimum sensitivity [ statistics ]

    a statistical measure similar to logDice which is the minimum of the two following numbers:

    • the number of co-occurrences divided by the frequency of the collocate
    • the number of co-occurrences divided by the frequency of the node word

    The minimum sensitivity number grows with a high number of co-occurrences and falls with a high number of occurrences of the individual words (node word or collocate).
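    Expressed as a formula: minimum sensitivity = min( f(x,y) / f(collocate), f(x,y) / f(node) ), where f(x,y) is the number of co-occurrences.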

  • multilevel list

    a list sorted at more than one level, e.g. a frequency list sorted by word form, then by lemma and then by tag; see this multilevel list in the BAWE corpus.
  • n-gram

    a sequence of a number of items (bigram = 2 items, trigram = 3 items, ... n-gram = n items). An item can refer to anything (letter, digit, syllable, token, word or others). In the context of corpora and corpus linguistics, n-grams typically refer to tokens (or words). In linguistics, n-grams are sometimes referred to as MWEs, i.e. multiword expressions. Generating a list of the most frequent n-grams will help us notice linguistic phenomena that might go unnoticed when using other tools. N-grams can identify discourse markers or chunks of language which should be taught/learnt as fixed phrases in language teaching. The tool to generate n-grams is the N-gram tool in Sketch Engine.
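    For example, the phrase on the other hand contains the 2-grams on the, the other, other hand and the 3-grams on the other, the other hand.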
  • node

    (in collocations) the central word in a collocation, e.g. strong wind consists of the collocate strong and the node wind; (in concordances) the search word or phrase, sometimes called a query, which appears in the centre of a KWIC concordance or is highlighted in other types of concordances
  • non-word

    Non-words (also spelt nonwords) are tokens which do not start with a letter of the alphabet. Examples of non-words are numbers and punctuation, but also tokens such as 25-hour, 16-year-old, !mportant, 3D. Tokens such as post-1945, mp3 or CO2 are normal words because they start with a letter. (There might be rare cases when the corpus author used a different definition in their corpus. The definition is part of the corpus configuration file.)
  • overall score [ statistics ]

    the score of a grammatical relation in a word sketch, based on logDice. The score is displayed in the header of each column of the relation.
  • parallel corpus [ corpus-types ]

    A parallel corpus consists of the same text translated into one or more languages. The texts are aligned (matching segments, usually sentences, are linked). The corpus allows searches in one or both languages to look up or compare translations.
  • PoS

    part of speech, some typical examples of parts of speech are: noun, adjective, verb, adverb etc.
  • POS tag [ attribute ]

    A POS tag is the same as tag.
  • POS tagger

    POS (part of speech) tagging is a process of annotating each token with a tag carrying information about the part of speech and often also morphological and grammatical information such as number, gender, case, tense etc. The automatic tagging tool is called a tagger or POS tagger. See also lemmatization, stemming
  • positional attribute

    A positional attribute is information added to each token in a corpus, e.g. its lemma (basic form of a word) or part of speech. Attributes differ between corpora and even between corpora in the same language. For example, the attributes of the plural of the English word dog might look like this:
    word   lemma   tag   lempos
    dogs   dog     n     dog-n
    The attributes available in a corpus are listed on the corpus statistics and details page. see also list of positional attributes
  • preloaded corpus [ corpus-types ]

    a ready-to-use corpus included in a Sketch Engine subscription or trial access, not created by a user, e.g. the British National Corpus
  • query

    a sequence of characters or words or their combinations entered by the user in order to retrieve a concordance. Often, the word query is not restricted to the concordance only but can also refer to any type of search or criteria used in connection with any Sketch Engine feature, i.e. word sketch, thesaurus, word list etc.
  • reference corpus

    A reference corpus is used in keyword extraction and term extraction. It is the corpus to which the focus corpus is compared. Usually, the same corpus is used as the keyword reference corpus and the term reference corpus, but different corpora can also be used. When using the Keywords & Terms tool in Sketch Engine, the user can set a different corpus as the reference corpus. see also term, term extraction
  • regular expressions

    a collection of special symbols that can be used to search for patterns rather than specific characters, e.g. to find all words starting with, containing or ending in a specific sequence of characters. For example, .*tion will find all words ending in tion with an unlimited number of characters at the beginning. read more»
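    A few more illustrative patterns (standard regular expression behaviour, not specific to any one corpus): super.* finds all words beginning with super (superman, supermarket, ...), and colou?r finds both color and colour because ? makes the preceding character optional.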
  • relative text type frequency

    compares the frequency in a specific text type (part of the corpus) to the whole corpus, or compares frequencies in different text types (parts of the corpus) even if they are not the same size. Thus the user can see whether the search word(s) are typical only of a specific text type (e.g. newspapers) and not of the rest of the corpus. The number is the relative frequency of the query result divided by the relative size of the particular text type. It can be interpreted as “how much more/less often the result of the query appears in this text type in comparison to the whole corpus”. A higher frequency means a higher value, a bigger text type size means a lower value. For example, the word 'test' has 2,000 hits in the corpus; 400 of them are in the text type “Spoken”, and this text type represents 10 % of the corpus. The relative text type frequency is then (400 / 2000) / 0.1 = 200 %, which means 'test' is twice as common in “Spoken” as in the whole corpus. see also Statistics in Sketch Engine
  • salience [ statistics ]

    a statistical measure of the significance of a specific token in the given context. This is measured with logDice; for more information, see section 3 of Statistics used in Sketch Engine.
  • search attribute

    the attribute that is used for the search and for creating a word list, e.g. a word list of word forms, lemmas, tags, etc.
  • search span

    the number of tokens on either side of the node that will be matched when filtering a concordance. A search span set from -5 to 5 means that only concordance lines which contain the requirement of the filter within a range of 5 tokens around the node are kept.
  • segment

    Segments refer to the parts into which a parallel (multilingual) corpus is divided for the purpose of alignment. Alignment means that the corpus contains information about which segment in one language is a translation of which segment in another language. Segments typically correspond to sentences but some corpora can be aligned at the paragraph or document level. The shorter the segments, the easier it is to locate the translated word or phrase in the segment.
  • simple math [ statistics ]

    the simple formula used for the computation and identification of terms and keywords. see Simple math.
  • stem [ attribute ]

    A stem is a part of a word without its affixes (suffixes, prefixes, etc.). Stems do not have to be valid word forms, e.g. the stem hav for the word form having, compared to the lemma have for the same word form. Stems are used instead of lemmas, or in addition to lemmas, with languages whose morphology requires it. Examples are agglutinating languages such as Turkish, Hungarian or Japanese.
  • stemming

    stemming is the process during which a word is stripped of its affixes (suffixes, prefixes, etc.) so that only the stem remains. Stemming is used to detect related words with the same stem, the part of the word which does not change with case, number or tense. Word stems are available, for example, in the Portuguese corpus ptTenTen and the Turkish corpus trTenTen. This analysis is carried out by tools called stemmers. See also POS tagger, lemmatization
  • structure

    a corpus structure refers to the segments or parts into which a corpus can be divided. Typically, a corpus is divided into sentences, paragraphs and documents but the corpus author can introduce various other structures to allow the analysis to focus on smaller or larger parts of the corpus. see a list of common corpus structures and Dividing a corpus into smaller parts and annotating them
  • subcorpus

    a corpus can be subdivided into an unlimited number of parts called subcorpora. Subcorpora can be used to divide the corpus by text type (fiction, newspaper), medium (spoken, written), time (e.g. by years) or any other criteria. A subcorpus can also be created from a concordance by including all concordance lines and the documents they come from in a subcorpus. A subcorpus can be selected on the advanced tab of each tool. Selecting a subcorpus will restrict the search or the analysis to only this subcorpus. How to create a subcorpus»
  • T-score [ statistics ]

    T-score expresses the certainty with which we can argue that there is an association between the words, i.e. their co-occurrence is not random. The value is affected by the frequency of the whole collocation which is why very frequent word combinations tend to reach a high T-score despite not being significant collocations. In most cases, T-score is more reliable or more useful than MI Score. see Concordance – Collocations, Statistics in Sketch Engine; compare MI Score
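    In its commonly used form (the exact formula used by Sketch Engine is documented in Statistics used in Sketch Engine), T-score = ( f(x,y) − f(x) × f(y) / N ) / √f(x,y), i.e. the difference between the observed and the expected number of co-occurrences divided by the square root of the observed number.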
  • tag [ attribute ]

    (also called part-of-speech tag, POS tag or morphological tag) is a label assigned to each token in an annotated corpus to indicate the part of speech and grammatical category. The tool used to annotate a corpus is called a tagger. A collection of tags used in a corpus is called a tagset. The most frequently used tags in a corpus are listed on the corpus information page with a link to the complete tagset. Our blog post on POS tags explains how they work.
  • tagset

    (called also tag set) is a list of part-of-speech tags used in one corpus. In Sketch Engine, corpora in the same language tend to use the same tagset but exceptions exist. To check the tagset used, access Corpus statistics and details. See our blog about POS tags.
  • TBL

    an application in Sketch Engine for collecting usage-example sentences to build dictionaries. Find more on the Tick Box Lexicography page
  • term

    Term is a concept used in connection with the Keywords & Terms tool. A term is a multi-word expression (consisting of several tokens) which appears more frequently in one corpus (focus corpus) compared to another corpus (reference corpus) and, at the same time, has the format of a term in the language. The format is defined in a term grammar which is specific to each language. The term grammar typically focusses on identifying noun phrases. The extracted terms are typical of the content of the corpus and can be used to identify the topic of the corpus. see also term extraction, keywords
  • term base

    In connection with CAT tools, a term base is a database of subject-specific terminology and other lexical items which need to be translated consistently. The CAT tool uses the term base to check the consistency of translation, to look for untranslated segments, and to suggest (or automatically supply) translations of the terms from the database.
  • term extraction

    the process of identifying subject-specific vocabulary in a subject-specific text, usually using specialized software. Finding one-word and multi-word terms in Sketch Engine is based on comparing the frequency of these words and phrases with their frequency in a reference corpus.
  • term grammar

    A term grammar is a collection of rules written in CQL which define the lexical structures, typically noun phrases, which should be included in term extraction. The term grammar uses POS tags, which is why term extraction is only available for tagged corpora. The use of a term grammar ensures a clean term extraction result which requires very little post-editing. see also term, keyword, Best term extraction (blog)
  • text analysis [ text-analysis ]

    text analysis (also content analysis) is a method for analyzing texts in order to gain information from them. The result of the content analysis is structured data which can be used for further processing. Sketch Engine offers a one-page automatic summary of a word's collocations with the word sketch feature. See also other text analysis tools.
  • text mining [ text-analysis ]

    text mining is an automatic process of extracting information from text, such as the keywords of a text or its source(s). The corresponding tools in Sketch Engine are WebBootCaT for creating corpora from the web and keyword and term extraction, which finds terminology in your texts. Read about other text analysis tools.
  • text type

    a text type refers to values assigned to structures (e.g. documents, paragraphs, sentences and others) inside a corpus. Text types are sometimes called metadata or headers. Text types can refer to the source (newspaper, book etc.), medium (spoken, written), time (year, century) or any other type of information about text. Not all corpora have documents annotated for text types. Corpora can be divided into subcorpora based on text types and searches and other analysis can be performed only on texts belonging to the selected text type. Conventions for inserting metadata manually
  • token

    A token is the smallest unit into which a corpus is divided. Typically, each word form and punctuation mark (comma, dot, ...) is a separate token (but don't in English consists of 2 tokens). Therefore, corpora contain more tokens than words. Spaces between words are not tokens. A text is divided into tokens by a tool called a tokenizer, which is often specific to each language.
  • tokenization

    Tokenization is the automatic process of separating text into tokens.
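    For example, the English sentence Don't panic! would typically be tokenized into four tokens: Do | n't | panic | ! (the exact output depends on the tokenizer used).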
  • tokenizer

    A tokenizer is a tool (software) used for dividing text into tokens. A tokenizer is language specific and takes into account the peculiarities of the language, e.g. don't in English is tokenized as two tokens. Sketch Engine contains tokenizers for many languages and also a universal tokenizer used for languages not yet supported by Sketch Engine. The universal tokenizer only recognizes whitespace characters as token boundaries, ignoring any language-specific rules. This, however, is sufficient for the use of many Sketch Engine features.
  • translation memory

    A translation memory is a database inside a CAT tool which holds segments of text translated in the past. The CAT tool can suggest (or automatically supply) translations based on matching text from the translation memory.
  • trends

    Trends is a feature used for diachronic analysis, i.e. for identifying how the frequency of the word (or other attributes) changes over time. read more
  • UMS

    a feature available to users with a local installation for the administration of users and corpora.
  • user corpus [ corpus-types ]

    a corpus created by a user. Users can create corpora by uploading their own data or using Sketch Engine to collect data from the Web. User corpora can be shared with other users. see also Create a corpus by uploading files, Create a corpus from the web
  • vertical file

    A vertical file is a text file where each token (or word) is on a separate line. This format is typically used for text corpora and may contain additional metainformation (annotation). The first column contains tokens and structures, the other columns may contain part of speech, lemmas or other positional attributes. An example of a vertical file:
    <p>	
    <s>	
    Text		NN	text-n
    corpora		NN	corpus-n
    are		VBP	be-v
    comprised	VVN	comprise-v
    of		IN	of-i
    
    column 1: tokens and structures
    column 2: part of speech tags
    column 3: lempos attribute
  • web mining [ text-analysis ]

    web mining is an application of data mining which extracts information from texts, focusing on gaining information and metadata from the web. For this task, Sketch Engine uses the fully automated tool WebBootCaT for creating corpora from the web, which also stores the metadata of the processed websites. Read about other text analysis tools.
  • word form [ attribute ]

    word is a positional attribute. It is short for word form and refers to one of the word forms that a lemma can take, e.g. the lemma go can take these word forms: go, went, gone, goes, going. A list of words is a list where each of go, went, gone, goes, going is listed separately and their frequencies are also calculated separately. A search using words is a search which will only find exactly what was typed; the other word forms will not be included. Word is case sensitive: apple and Apple are two different word forms. compare word_lc (lowercase), lemma, lemma_lc (lowercase), list of attributes
  • word list

    A word list is a generic name for various types of lists such as list of words, lemmas, POS tags or other attributes with their frequency (hit counts, document counts or others).
  • word sketch

    A word sketch is a tool to display collocations (= word combinations) in a compact, easy-to-understand way. The word sketch makes it easy to understand how a word behaves, which contexts it typically appears in and which words it can be used together with. more»
  • Word Sketch grammar

    Word Sketch grammar (WSG) is a set of rules defining the grammatical relations (= columns/categories) in a word sketch. WSG is language dependent; the same WSG cannot be shared across languages. Different corpora in the same language can use the same or a different WSG. Users can write their own WSG to match their specific needs. Corpora in unsupported languages can make use of a universal WSG which provides only basic statistics of the words surrounding the keyword, ignoring the grammar of the language. The universal WSG can also be modified by the user. more»