PGRdup is an R package to facilitate the search for probable/possible duplicate accessions in Plant Genetic Resources (PGR) collections using passport databases. Primarily, this package implements a workflow (Fig. 1) designed to fetch groups or sets of germplasm accessions with similar passport data, particularly in fields associated with accession names, within or across PGR passport databases. It offers a suite of functions for data pre-processing, creation of a searchable Key Word in Context (KWIC) index of keywords associated with accession records and the identification of probable duplicate sets by fuzzy, phonetic and semantic matching of keywords. It also has functions to enable the user to review, modify and validate the probable duplicate sets retrieved.
The goal of this document is to introduce users to these functions and familiarise them with the workflow intended to fetch probable duplicate sets. This document assumes a basic knowledge of the R programming language.
Uninstalled dependencies (packages which PGRdup depends on, viz. data.table, igraph, stringdist and stringi) are also installed because of the argument dependencies = TRUE.
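For instance, the package along with these dependencies can be installed from CRAN as follows:

install.packages("PGRdup", dependencies = TRUE)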
The package is essentially designed to operate on PGR passport data present in a data frame object, with each row holding one record and columns representing the attribute fields. For example, consider the dataset GN1000 supplied along with the package.
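It can be loaded and examined as follows:

library(PGRdup)
data("GN1000")
str(GN1000)  # one record per row; columns are the attribute fields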
--------------------------------------------------------------------------------
Welcome to PGRdup version 0.2.3.4
# To know how to use this package type:
  browseVignettes(package = 'PGRdup')
  for the package vignette.
# To know whats new in this version type:
  news(package='PGRdup')
  for the NEWS file.
# To cite the methods in the package type:
  citation(package='PGRdup')
# To suppress this message use:
  suppressPackageStartupMessages(library(PGRdup))
--------------------------------------------------------------------------------
If the passport data exists as an Excel sheet, it can first be converted to a comma-separated values (csv) file or a tab-delimited file and then easily imported into the R environment using the base functions read.csv and read.table respectively. Similarly, read_csv() and read_tsv() from the readr package can also be used. Alternatively, the package readxl can be used to read the data directly from Excel. In case of large csv files, the function fread in the data.table package can be used to rapidly load the data.
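A minimal sketch of these import options (the file names are hypothetical):

passport <- read.csv("passport.csv")                               # csv via base R
passport <- read.table("passport.txt", sep = "\t", header = TRUE)  # tab delimited
passport <- readr::read_csv("passport.csv")                        # readr
passport <- readxl::read_excel("passport.xlsx")                    # directly from Excel
passport <- data.table::fread("passport.csv")                      # large csv files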
If the PGR passport data is in a database management system (DBMS), the required table can be imported as a data frame into R using the appropriate R-database interface package, for example RMySQL for MySQL, ROracle for Oracle, etc.
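A sketch using the DBI interface with RMySQL; the connection details and the table name here are hypothetical:

library(DBI)
con <- dbConnect(RMySQL::MySQL(), dbname = "pgr", host = "localhost",
                 user = "user", password = "pass")
passport <- dbReadTable(con, "passport_table")  # import the table as a data frame
dbDisconnect(con)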
The PGR data downloaded from the Genesys database as a Darwin Core - Germplasm zip archive can be imported into the R environment as a flat file data.frame using the read.genesys function.
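For example, assuming an archive named as in the examples section below:

PGRgenesys <- read.genesys("genesys-accessions-filtered.zip")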
Data pre-processing is a critical step which can affect the quality of the probable duplicate sets being retrieved. It involves data standardization as well as data cleaning which can be achieved using the functions DataClean, MergeKW, MergePrefix and MergeSuffix.
The DataClean function can be used to clean the character strings in passport data fields(columns) specified as the input character vector x according to the conditions specified in the arguments.
Commas, semicolons and colons which are sometimes used to separate multiple strings or names within the same field can be replaced with a single space using the logical arguments fix.comma, fix.semcol and fix.col respectively.
fix.space can be used to convert all space characters such as tab, newline, vertical tab, form feed and carriage return to spaces and finally convert multiple spaces to single space.
This function can hence be used to tidy up the multiple forms of messy data existing in fields associated with accession names in PGR passport databases (Table 1).
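A minimal sketch of DataClean on a few hypothetical accession-name strings:

names <- c("S7-12-6", "ICG-3505", "U 4-47-18;EC 21127", "AH 6481", "RS   1")
DataClean(names)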
Several common keyword string pairs or keyword prefixes and suffixes exist in fields associated with accession names in PGR passport databases. They can be merged using the functions MergeKW, MergePrefix and MergeSuffix respectively. The keyword string pairs, prefixes and suffixes can be supplied as a list or a vector to the argument y in these functions.
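A sketch of these merge functions; the keyword pairs, prefix, suffix and the delim values are illustrative assumptions:

names <- c("Gujarat- Dwarf", "Castle  Cary", "Mota Company", "Nagpur.local")
names <- MergeKW(names, y = list(c("Gujarat", "Dwarf"), c("Castle", "Cary")),
                 delim = c("space", "dash", "period"))
names <- MergePrefix(names, y = c("Mota"), delim = c("space", "dash", "period"))
names <- MergeSuffix(names, y = c("local"), delim = c("space", "dash", "period"))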
Generation of KWIC Index
The function KWIC generates a Key Word in Context index (Knüpffer 1988; Knüpffer, Frese, and Jongen 1997) from the data frame of a PGR passport database based on the fields(columns) specified in the argument fields, along with the keyword frequencies, and gives the output as a list of class KWIC. The first element of the vector specified in fields is considered as the primary key or identifier which uniquely identifies all rows in the data frame.
This function fetches keywords from different fields specified, which can be subsequently used for matching to identify probable duplicates. The frequencies of the keywords retrieved can help in determining if further data pre-processing is required and also to decide whether any common keywords can be exempted from matching (Fig. 2).
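For example, creating the index for the GN dataset (the field set follows the package vignette, with NationalID as the primary key) triggers the error shown below when the primary key field contains aberrant records:

GNfields <- c("NationalID", "CollNo", "DonorID", "OtherID1", "OtherID2")
GNKWIC <- KWIC(GN, GNfields, min.freq = 1)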
Error in KWIC(GN, GNfields, min.freq = 1) :
Primary key/ID field should be unique and not NULL
Use PGRdup::ValidatePrimKey() to identify and rectify the aberrant records first
The erroneous records can be identified using the helper function ValidatePrimKey.
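For instance, validating the NationalID field used as the primary key (as in the examples section below) gives the output that follows:

ValidatePrimKey(x = GN, prim.key = "NationalID")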
$message1
[1] "ERROR: Duplicated records found in prim.key field"

$Duplicates
CommonName BotanicalName NationalID CollNo DonorID
1005 COMET Landrace United States of America 2004

$message2
[1] "ERROR: NULL records found in prim.key field"

$NullRecords
CommonName BotanicalName NationalID CollNo DonorID

After the aberrant records are rectified, re-running ValidatePrimKey confirms that the field satisfies the primary key constraints:

$message1
[1] "OK: No duplicated records found in prim.key field"

$Duplicates
NULL

$message2
[1] "OK: No NULL records found in prim.key field"

$NullRecords
NULL
Retrieval of Probable Duplicate Sets
Once KWIC indexes are generated, probable duplicates of germplasm accessions can be identified by fuzzy, phonetic and semantic matching of the associated keywords using the function ProbDup. The sets are retrieved as a list of data frames of class ProbDup.
Keywords that are not to be used for matching can be specified as a vector in the excep argument.
Methods
The function can execute matching according to either one of the following three methods as specified by the method argument.
Method "a" : Performs string matching of keywords in a single KWIC index to identify probable duplicates of accessions in a single PGR passport database.

PhoneticDuplicates 99 260
Total 99 260(Distinct:260)

Method "b" : Performs string matching of keywords in the first KWIC index (query) with that of the keywords in the second index (source) to identify probable duplicates of accessions of the first PGR passport database among the accessions in the second database.

Method "c" : Performs string matching of keywords in two different KWIC indexes jointly to identify probable duplicates of accessions from among two PGR passport databases.
Fuzzy matching or approximate string matching of keywords is carried out by computing the generalized Levenshtein (edit) distance between them. This distance measure counts the number of deletions, insertions and substitutions necessary to turn one string into another.
Exact matching can be enforced with the argument force.exact set as TRUE. It can be used to avoid fuzzy matching when the number of alphabet characters in keywords is less than a critical value (max.alpha). Similarly, the value of max.digit can also be set according to the requirements to enforce exact matching. The default value of Inf avoids fuzzy matching and enforces exact matching for all keywords having any numerical characters. If max.digit and max.alpha are both set to Inf, exact matching will be enforced for all the keywords.
When exact matching is enforced, for keywords having both alphabet and numeric characters and with the number of alphabet characters greater than max.alpha, matching will be carried out separately for the alphabet and numeric characters present.
FuzzyDuplicates 378 745
Total 378 745(Distinct:745)
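A sketch enforcing exact matching for short keywords; the threshold values are illustrative:

GNdupF <- ProbDup(kwic1 = GNKWIC, method = "a", excep = exep, fuzzy = TRUE,
                  force.exact = TRUE, max.alpha = 4, max.digit = Inf)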
Phonetic matching of keywords is carried out using the Double Metaphone phonetic algorithm (Philips 2000), implemented as the helper function DoubleMetaphone, to identify keywords that have similar pronunciation.
PhoneticDuplicates 59 156
Total 59 156(Distinct:156)
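The helper can also be called directly on a character vector; the strings here are illustrative:

DoubleMetaphone(c("Jyothi", "Jyoti", "Jyothy"))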
Semantic matching matches keywords based on a list of accession name synonyms supplied as a list of character vectors of synonym sets (synsets) to the syn argument. Synonyms in this context refer to interchangeable identifiers or names by which an accession is recognized. Multiple keywords specified as members of the same synset in syn are matched. To facilitate accurate identification of synonyms from the KWIC index, identical data standardization operations using the Merge* and DataClean functions for both the original database fields and the synset list are recommended.
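A sketch of semantic-only matching using the synsets from the examples section below; disabling fuzzy and phonetic matching here is an assumption:

syn <- list(c("CHANDRA", "AH114"), c("TG1", "VIKRAM"))
GNdupSem <- ProbDup(kwic1 = GNKWIC, method = "a", excep = exep,
                    fuzzy = FALSE, phonetic = FALSE, semantic = TRUE, syn = syn)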
As the number of keywords in the KWIC indexes increases, the memory consumption by the function also increases proportionally. This is because, for string matching, this function relies upon the creation of an n × m matrix of all possible keyword pairs for comparison, where n and m are the number of keywords in the query and source indexes respectively. This can lead to cannot allocate vector of size... errors in case of large KWIC indexes where the comparison matrix is too large to reside in memory. In such a case, the chunksize argument can be reduced from the default 1000 to get the appropriate size of the KWIC index keyword block to be used for searching for matches at a time. However, a smaller chunksize may lead to longer computation time due to the memory-time trade-off.
The progress of matching is displayed in the console as the number of keyword blocks completed out of the total number of blocks, the percentage of achievement and a text-based progress bar.
In case of multi-byte characters in keywords, the speed of keyword matching is further dependent upon the useBytes argument as described in help("stringdist-encoding") for the stringdist function in the namesake package (van der Loo 2014), which is made use of here for string matching.
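A sketch of a memory-constrained run; the chunksize value is illustrative:

GNdup <- ProbDup(kwic1 = GNKWIC, method = "a", excep = exep, fuzzy = TRUE,
                 chunksize = 500, useBytes = TRUE)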
The CPU time taken for retrieval of probable duplicate sets under different options for the arguments chunksize and useBytes can be visualized using the microbenchmark package (Fig. 3).
Fig. 3. CPU time with different ProbDup arguments estimated using the microbenchmark package.
Set Review, Modification and Validation
The initially retrieved sets may intersect with each other because there might be accessions which occur in more than one duplicate set. Disjoint sets can be generated by merging such overlapping sets using the function DisProbDup.
Disjoint sets are retrieved either individually for each type of probable duplicate set or considering all types of sets simultaneously. In case of the latter, the disjoint sets of all the types alone are returned in the output as an additional data frame DisjointDuplicates in an object of class ProbDup.
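A minimal sketch; with the default settings, all types of sets are assumed to be considered simultaneously:

disGNdup <- DisProbDup(GNdup)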
Once duplicate sets are retrieved, they can be validated by manual clerical review by comparing with the original PGR passport database(s) using the ReviewProbDup function. This function helps to retrieve the PGR passport information associated with fuzzy, phonetic or semantic probable duplicate sets in an object of class ProbDup from the original database(s) from which they were identified. The original information of accessions comprising a set, which has not been subjected to data standardization, can be compared under manual clerical review for the validation of the set. By default, only the fields(columns) which were used initially for creation of the KWIC indexes using the KWIC function are retrieved. Additional fields(columns), if necessary, can be specified using the extra.db1 and extra.db2 arguments.
When any primary ID/key records in the fuzzy, phonetic or semantic duplicate sets are found to be missing from the original databases specified in db1 and db2, they are ignored with a warning, and only the matching records are considered for retrieving the information.
This may be due to data standardization of the primary ID/key field using the function DataClean before creation of the KWIC index and subsequent identification of probable duplicate sets. In such a case, it is recommended to use an identical data standardization operation on the primary ID/key field of databases specified in db1 and db2 before running this function.
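For example, as in the examples section below:

RevGNdup <- ReviewProbDup(pdup = disGNdup, db1 = GN1000,
                          extra.db1 = c("SourceCountry", "TransferYear"),
                          max.count = 30, insert.blanks = TRUE)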
With R <= v3.0.2, due to copying of named objects by list(), an Invalid .internal.selfref detected and fixed... warning can appear, which may be safely ignored.
The output data frame can be subjected to clerical review either after exporting into an external spreadsheet using the write.csv function or by using the edit function.
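For example (from the examples section below):

RevGNdup <- edit(RevGNdup)                                       # review interactively
write.csv(file = "Duplicate sets for review.csv", x = RevGNdup)  # or export for review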
The column DEL can be used to indicate whether a record has to be deleted from a set or not. Y indicates “Yes”, and the default N indicates “No”.
The column SPLIT similarly can be used to indicate whether a record in a set has to be branched into a new set. A set of identical integers in this column other than the default 0 can be used to indicate that they are to be removed and assembled into a new set.
SET_NO TYPE K[a] PRIM_ID IDKW DEL SPLIT COUNT
1 1 F [K1] EC100277 [K1]EC100277:U44712 N 0 3
2 1 F [K1] EC21118 [K1]EC21118:U44712 N 0 3
4 <NA> <NA> NA
5 STARR United States of America 2004
6 United States of America 2001
After clerical review, the data frame created using the function ReviewProbDup from an object of class ProbDup can be reconstituted back into an object of the same class using the function ReconstructProbDup.
The instructions for modifying the sets entered in the appropriate format in the columns DEL and SPLIT during clerical review are taken into account for reconstituting the probable duplicate sets. Any records with Y in column DEL are deleted and records with identical integers in the column SPLIT other than the default 0 are reassembled into a new set.
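A minimal sketch:

GNdupRev <- ReconstructProbDup(RevGNdup)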
Other Functions
The ProbDup object is a list of data frames of different kinds of probable duplicate sets viz. FuzzyDuplicates, PhoneticDuplicates, SemanticDuplicates and DisjointDuplicates. Each row of the component data frame will have information of a set, the type of set, the set members as well as the keywords based on which the set was formed. This data can be reshaped into long form using the function ParseProbDup, which transforms a ProbDup object into a single data frame.
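A minimal sketch; the output below illustrates the resulting long form:

GNdupParsed <- ParseProbDup(GNdup)
head(GNdupParsed)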
SET_NO TYPE K PRIM_ID IDKW COUNT
1 1 F [K1] EC100277 [K1]EC100277:U44712 3
2 1 F [K1] EC21118 [K1]EC21118:U44712 3
4 NA <NA> <NA> <NA> NA
5 2 F [K1] EC100280 [K1]EC100280:NC5 3
6 2 F [K1] EC100721 [K1]EC100721:NC5 3
The prefix K* here indicates the KWIC index of origin. This is useful in ascertaining the database of origin of the accessions when method "b" or "c" was used to create the input ProbDup object.
Once the sets are reviewed and modified, the validated set data fields from the ProbDup object can be added to the original PGR passport database using the function AddProbDup. The associated data fields such as SET_NO, ID and IDKW are added based on the PRIM_ID field(column).
In case more than one KWIC index was used to generate the object of class ProbDup, the argument addto can be used to specify to which database the data fields are to be added. The default "I" indicates the database from which the first KWIC index was created and "II" indicates the database from which the second index was created.
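A sketch of adding the fields to the first database; the pdup and db argument names are assumptions:

GNwithdup <- AddProbDup(pdup = GNdup, db = GN1000, addto = "I")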
The function SplitProbDup can be used to split an object of class ProbDup into two on the basis of set counts. This is useful for reviewing separately the sets with larger set counts.
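A sketch; the splitat argument, assumed from the help page, is taken to give the set count threshold for the fuzzy, phonetic and semantic sets respectively:

GNdupSplit <- SplitProbDup(GNdup, splitat = c(10, 10, 10))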
SemanticDuplicates 2 5
Total 479 1010(Distinct:762)
The summary of accessions according to a grouping factor field(column) in the original database(s) within the probable duplicate sets retrieved in a ProbDup object can be visualized using the ViewProbDup function. The resulting plot can be used to examine the extent of probable duplication within and between groups of accession records.
Fig. 5. Summary visualization of groundnut probable duplicate sets retrieved according to SourceCountry field.
The function KWCounts can be used to compute the keyword counts from PGR passport database fields(columns) which are considered for identification of probable duplicates. These keyword counts can give a rough indication of the completeness of the data in such fields (Fig. 6).
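A sketch assuming a calling convention parallel to KWIC (database, fields, exempted keywords):

GNKWCouts <- KWCounts(GN, GNfields, exep)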
Fig. 6. The keyword counts in the database fields considered for identification of probable duplicates for A. the entire GN1000 dataset, B. the probable duplicate records alone and C. the unique records alone.
Citing PGRdup

To cite the R package 'PGRdup' in publications use:

  Aravind, J., Radhamani, J., Kalyani Srinivasan, Ananda Subhash,
  B., and Tyagi, R. K. (2019). PGRdup: Discover Probable
  Duplicates in Plant Genetic Resources Collections. R package
  version 0.2.3.4,
  https://github.com/aravind-j/PGRdup,
  https://cran.r-project.org/package=PGRdup.

@Manual{,
  title = {PGRdup: Discover Probable Duplicates in Plant Genetic Resources Collections},
  author = {J. Aravind and J. Radhamani and {Kalyani Srinivasan} and B. {Ananda Subhash} and Rishi Kumar Tyagi},
  year = {2019},
  note = {R package version 0.2.3.4},
  note = {https://github.com/aravind-j/PGRdup,},
  note = {https://cran.r-project.org/package=PGRdup},
}
Knüpffer, H., L. Frese, and M. W. M. Jongen. 1997. “Using Central Crop Databases: Searching for Duplicates and Gaps.” In Central Crop Databases: Tools for Plant Genetic Resources Management. Report of a Workshop, Budapest, Hungary, 13-16 October 1996, edited by E. Lipman, M. W. M. Jongen, T. J. L. van Hintum, T. Gass, and L. Maggioni, 67–77. Rome, Italy and Wageningen, The Netherlands: International Plant Genetic Resources Institute and Centre for Genetic Resources. https://www.bioversityinternational.org/index.php?id=244&tx_news_pi1%5Bnews%5D=334&cHash=3738ae238a450ff71bb1cb087687ac9c.
ProbDup identifies probable duplicates of germplasm accessions in KWIC
indexes created from PGR passport databases using fuzzy, phonetic and
semantic matching strategies.
A list with character vectors of synsets (see Details).
Value
A list of class ProbDup containing the following data frames
ID:KW
The 'matching' keywords along with the IDs.
COUNT
The number of elements in a set.
SET_NO
The prefix [K*] indicates the KWIC index of origin of the KEYWORD or
PRIM_ID.
Details
This function performs fuzzy, phonetic and semantic matching of keywords in
first PGR passport database among the accessions in the second database.
Method c:
Perform string matching of keywords in two different
KWIC indexes jointly to identify probable duplicates of accessions from among
two PGR passport databases.
Fuzzy matching or approximate string matching of keywords is carried
out by computing the generalized Levenshtein (edit) distance between them.
This distance measure counts the number of deletions, insertions and
substitutions necessary to turn one string into another. A distance of up
information associated with the identified sets in an object of class
ProbDup as fields(columns) to the original PGR passport database.
All of the string matching operations here are executed through the
stringdist-package functions.
As the number of keywords in the KWIC indexes increases, the memory
30%) and a text-based progress bar.
In case of multi-byte characters in keywords, the matching speed is further
dependent upon the useBytes argument as described in
Encoding issues for the stringdist
function, which is made use of here for string matching.
ReconstructProbDup reconstructs a data frame of probable duplicate
sets created using the function ReviewProbDup and subjected to manual
clerical review, back into an object of class ProbDup.
ReconstructProbDup(rev)
Arguments
COUNT and IDKW
Value
An object of class ProbDup with the modified fuzzy,
phonetic and semantic probable duplicate sets according to the instructions
specified under clerical review.
Details
A data frame created using the function ReviewProbDup
Any records with Y in column DEL are deleted and records with
identical integers in the column SPLIT other than the default 0
are reassembled into a new set.
Retrieve probable duplicate set information from PGR passport database for review
ReviewProbDup retrieves information associated with the probable
duplicate sets from the original PGR passport database(s) from which they
were identified in order to facilitate manual clerical review.
A data frame of the long/narrow form of the probable duplicate sets
SPLIT
Column to indicate whether record has to be branched and assembled into new
set.
COUNT
The number of elements in a set.
SET_NO
For the
retrieved columns(fields) the prefix K* indicates the KWIC index of
origin.
Details
This function helps to retrieve PGR passport information associated with
Additional fields(columns) if necessary can be specified using the
extra.db1 and extra.db2 arguments.
The output data frame can be subjected to clerical review either after
exporting into an external spreadsheet using write.csv
function or by using the edit function.
The column DEL can be used to indicate whether a record has to be
deleted from a set or not. Y indicates "Yes", and the default N
indicates "No".
a set has to be branched into a new set. A set of identical integers in this
column other than the default 0 can be used to indicate that they are
to be removed and assembled into a new set.
Note
When any primary ID/key records in the fuzzy, phonetic or semantic
index and subsequent identification of probable duplicate sets. In such a
case, it is recommended to use an identical data standardization operation
on the databases db1 and db2 before running this function.
With R <= v3.0.2, due to copying of named objects by list(),
Invalid .internal.selfref detected and fixed... warning can appear,
which may be safely ignored.
Examples
"U", "VALENCIA", "VIRGINIA", "WHITE")
# Specify the synsets as a list
syn <- list(c("CHANDRA", "AH114"), c("TG1", "VIKRAM"))

# Fetch probable duplicate sets
GNdup <- ProbDup(kwic1 = GNKWIC, method = "a", excep = exep, fuzzy = TRUE,
# Get the data frame for reviewing the duplicate sets identified
RevGNdup <- ReviewProbDup(pdup = disGNdup, db1 = GN1000,
                          extra.db1 = c("SourceCountry", "TransferYear"),
                          max.count = 30, insert.blanks = TRUE)
# Examine and review the duplicate sets using edit function
RevGNdup <- edit(RevGNdup)

# OR examine and review the duplicate sets after exporting them as a csv file
write.csv(file = "Duplicate sets for review.csv", x = RevGNdup)
}
Arguments
Fuzzy, Phonetic and Semantic duplicate sets in pdup are to be split.
Value
A list with the divided objects of class ProbDup
(pdup1 and pdup2) along with the corresponding lists of
accessions present in each (list1 and list2).
Validate if a data frame column conforms to primary key/ID constraints
ValidatePrimKey checks if a column in a data frame conforms to the
primary key/ID constraints of absence of duplicates and NULL values. Aberrant
records if encountered are returned in the output list.
ValidatePrimKey(x, prim.key)
Arguments
Details).
Value
A list containing the following components:
x or not.
NullRecords
A data frame of the records
with NULL prim.key values if they were encountered.
message1
Details
The function checks whether a field(column) in a data frame of PGR passport
It is recommended to run this function and rectify aberrant records in a PGR
passport database before creating a KWIC index using the
KWIC function.
if (FALSE) {
# Show error in case of duplicates and NULL values
# in the primary key/ID field "NationalID"
GN[1001:1005, ] <- GN[1:5, ]
GN[1001, 3] <- ""
ValidatePrimKey(x = GN, prim.key = "NationalID")
}
Visualize the probable duplicate sets retrieved in a ProbDup object
ViewProbDup plots summary visualizations of accessions within the
probable duplicate sets retrieved in a ProbDup object according to a
grouping factor field(column) in the original database(s).
A grid graphical object (Grob)
of the summary visualization plot. Can be plotted using the grid.arrange function
Summary1
Note
When any primary ID/key records in the fuzzy, phonetic or semantic
"type" will order according to the kind of sets, "sets" will
order according to the number of sets in each kind and "acc" will
order according to the number of accessions in each kind.
The individual plots are made using ggplot and then
grouped together using gridExtra-package.
print.KWIC prints to console the summary of an object of class
KWIC including the database fields(columns) used, the total number of
keywords and the number of distinct keywords in the index.
print.ProbDup prints to console the summary of an object of class
ProbDup including the method used ("a", "b" or "c"), the database
fields(columns) considered, the number of probable duplicate sets of each
kind along with the corresponding number of records.
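A minimal sketch; printing the objects invokes these methods:

print(GNKWIC)  # summary of the KWIC index
print(GNdup)   # summary of the ProbDup object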
if (FALSE) {
# Import the DwC-Germplasm zip archive "genesys-accessions-filtered.zip"
PGRgenesys <- read.genesys("genesys-accessions-filtered.zip",
                           scrub.names.space = TRUE, readme = TRUE)
}