diff --git a/.gitattributes b/.gitattributes new file mode 100644 index 000000000..5e21d85e2 --- /dev/null +++ b/.gitattributes @@ -0,0 +1,5 @@ +* text = auto +*.java text diff=java +*.png binary +*.jpg binary +*.xlsx binary diff --git a/README.md b/README.md index 7fe9747bf..079ac562b 100644 --- a/README.md +++ b/README.md @@ -31,29 +31,29 @@ See the [docs folder](doc/index.md) for additional information on implementation This section briefly lists the main technologies and principles used (or planned to be used) in the application. - Spring Boot 3, Spring Framework 6, Spring Security, Spring Data (paging, filtering) -- Jackson 2.13 +- Jackson Databind - [JB4JSON-LD](https://github.com/kbss-cvut/jb4jsonld-jackson) - Java - JSON-LD (de)serialization library - [JOPA](https://github.com/kbss-cvut/jopa) - persistence library for the Semantic Web -- JUnit 5 (RT used 4), Mockito 4 (RT used 1), Hamcrest 2 (RT used 1) -- Servlet API 4 (RT used 3.0.1) -- JSON Web Tokens (CSRF protection not necessary for JWT) +- JUnit 5, Mockito 4, Hamcrest 2 +- Jakarta Servlet API 4 +- JSON Web Tokens - SLF4J + Logback - CORS (for separate frontend) - Java bean validation (JSR 380) -## Ontology +## Ontologies -The ontology on which TermIt is based can be found in the `ontology` folder. For proper inference -functionality, `termit-model.ttl`, the -_popis-dat_ ontology model (http://onto.fel.cvut.cz/ontologies/slovnik/agendovy/popis-dat/model) and the SKOS vocabulary -model -(http://www.w3.org/TR/skos-reference/skos.rdf) need to be loaded into the repository used by TermIt (see `doc/setup.md`) -for details. +The ontology on which TermIt is based can be found in the `ontology` folder. It extends the +_popis-dat_ ontology (http://onto.fel.cvut.cz/ontologies/slovnik/agendovy/popis-dat). TermIt vocabularies and terms +use the SKOS vocabulary (http://www.w3.org/TR/skos-reference/skos.rdf). + +Relevant ontologies need to be loaded into the repository for proper inference functionality. See [setup.md](doc/setup.md) +for more details. ## Monitoring -We use [JavaMelody](https://github.com/javamelody/javamelody) for monitoring the application and its usage. The data are +[JavaMelody](https://github.com/javamelody/javamelody) can be used for monitoring the application and its usage. The data are available on the `/monitoring` endpoint and are secured using _basic_ authentication. Credentials are configured using the `javamelody.init-parameters.authorized-users` parameter in `application.yml` (see diff --git a/doc/implementation.md b/doc/implementation.md index 3d4005e5e..8518c25ec 100644 --- a/doc/implementation.md +++ b/doc/implementation.md @@ -43,23 +43,19 @@ follows: Fulltext search currently supports multiple types of implementation: * Simple substring matching on term and vocabulary label _(default)_ -* RDF4J with Lucene SAIL * GraphDB with Lucene connector Each implementation has its own search query which is loaded and used by `SearchDao`. In order for the more advanced -implementations for Lucene to work, a corresponding Maven profile (**graphdb**, **rdf4j**) has to be selected. This +implementation for Lucene to work, a corresponding Maven profile (**graphdb**) has to be selected. This inserts the correct query into the resulting artifact during build. If none of the profiles is selected, the default search is used. Note that in case of GraphDB, corresponding Lucene connectors (`label_index` for labels and `defcom_index` for -definitions and comments) -have to be created as well. 
+definitions and comments) have to be created as well. ### RDFS Inference in Tests -The test in-memory repository is configured to be a SPIN SAIL with RDFS inferencing engine. Thus, basically all the -inference features available in production are available in tests as well. However, the repository is by default left -empty (without the model or SPIN rules) to facilitate test performance (inference in RDF4J is really slow). To load the +The test in-memory repository is configured to be a RDF4J SAIL with RDFS inferencing engine. The repository is by default left +empty (without the model) to facilitate test performance (inference in RDF4J is really slow). To load the TermIt model into the repository and thus enable RDFS inference, call the `enableRdfsInference` -method available on both `BaseDaoTestRunner` and `BaseServiceTestRunner`. SPIN rules are currently not loaded as they -don't seem to be used by any tests. +method available on both `BaseDaoTestRunner` and `BaseServiceTestRunner`. diff --git a/doc/setup.md b/doc/setup.md index 81dad0f81..839fe0b3c 100644 --- a/doc/setup.md +++ b/doc/setup.md @@ -6,7 +6,7 @@ This guide provides information on how to build and deploy TermIt. ### System Requirements -* JDK 11 or newer (tested up to JDK 11 LTS) +* JDK 17 or newer * Apache Maven 3.5.x or newer @@ -16,13 +16,11 @@ This guide provides information on how to build and deploy TermIt. To build TermIt for **non**-development deployment, use Maven and select the `production` profile. -In addition, full text search in TermIt supports three modes: +In addition, full text search in TermIt supports two modes: 1. Default label-based substring matching -2. RDF4J repository with Lucene index -3. GraphDB repository with Lucene index +2. GraphDB repository with Lucene indexes -Options 2. and 3. have their respective Maven profiles - `rdf4j` and `graphdb`. Select one of them -or let the system use the default one. +Option 2. has its respective Maven profile - `graphdb`. Moreover, TermIt can be packaged either as an executable JAR (using Spring Boot) or as a WAR that can be deployed in any Servlet API 4-compatible application server. Maven profiles `standalone` (active by default) and `war` can be used to activate them respectively. @@ -40,9 +38,9 @@ There is one parameter not used by the application itself, but by Spring - `spri by the application: * `lucene` - decides whether Lucene text indexing is enabled and should be used in full text search queries. * `admin-registration-only` - decides whether new users can be registered only by application admin, or whether anyone can register. -* `no-cache` - disables EhCache which is used to cache lists of resources and vocabularies for faster retrieval. +* `no-cache` - disables Ehcache, which is used to cache lists of resources and vocabularies for faster retrieval, and persistence cache. -The `lucene` Spring profile is activated automatically by the `rdf4j` and `graphdb` Maven profiles. `admin-registration-only` and `no-cache` have to be added +The `lucene` Spring profile is activated automatically by the `graphdb` Maven. 
`admin-registration-only` and `no-cache` have to be added either in `application.yml` directly, or one can pass the parameter to Maven build, e.g.: * `mvn clean package -P graphdb "-Dspring.profiles.active=lucene,admin-registration-only"` @@ -51,7 +49,7 @@ either in `application.yml` directly, or one can pass the parameter to Maven bui #### Example * `mvn clean package -B -P production,graphdb "-Ddeployment=DEV"` -* `clean package -B -P production,rdf4j,war "-Ddeployment=STAGE"` +* `clean package -B -P production,graphdb,war "-Ddeployment=STAGE"` The `deployment` parameter is used to parameterize log messages and JMX beans and is important in case multiple deployments of TermIt are running in the same Tomcat. @@ -74,20 +72,17 @@ or configure it permanently by setting the `MAVEN_OPTS` variable in System Setti ### System Requirements -* JDK 11 or later (tested with JDK 11) -* (WAR) Apache Tomcat 8.5 or 9.x (recommended) or any Servlet API 4-compatible application server +* JDK 17 or later +* (WAR) Apache Tomcat 10 or any Jakarta Servlet API 4-compatible application server * _For deployment of a WAR build artifact._ - * Do not use Apache Tomcat 10.x, it is based on the new Jakarta EE and TermIt would not work on it due to package namespace issues (`javax` -> `jakarta`) + * Do not use Apache Tomcat 9.x or older, it is based on the old Java EE and TermIt would not work on it due to package namespace issues (`javax` -> `jakarta`) ### Setup Application deployment is simple - just deploy the WAR file (in case of the `war` Maven build profile) to an application server or run the JAR file (in case of the `standalone` Maven build profile). -What is important is the correct setup of the repository. We will describe two options: - -1. GraphDB -2. RDF4J +What is important is the correct setup of the repository. #### GraphDB @@ -99,16 +94,16 @@ In order to support inference used by the application, a custom ruleset has to b 4. Create the following Lucene connectors in GraphDB: * *Label index* * name: **label_index** - * Field name: **label**, **title** - * Property chain: **http://www.w3.org/2000/01/rdf-schema#label**, **http://purl.org/dc/terms/title** + * Field names: **prefLabel**, **altLabel**, **hiddenLabel**, **title** + * Property chains: **http://www.w3.org/2004/02/skos/core#prefLabel**, http://www.w3.org/2004/02/skos/core#altLabel**, **http://www.w3.org/2004/02/skos/core#hiddenLabel**, **http://purl.org/dc/terms/title** * Languages: _Leave empty (for indexing all languages) or specify the language tag - see below_ * Types: **http://www.w3.org/2004/02/skos/core#Concept**, **http://onto.fel.cvut.cz/ontologies/slovník/agendový/popis-dat/pojem/slovník** * Analyzer: Analyzer appropriate for the system language, e.g. **org.apache.lucene.analysis.cz.CzechAnalyzer** * *Definition and comment index* * name: **defcom_index** - * Field name: **definition**, **comment**, **description** + * Field name: **definition**, **scopeNote**, **description** * Languages: _Leave empty (for indexing all languages) or specify the language tag - see below_ - * Property chain: **http://www.w3.org/2004/02/skos/core#definition**, **http://www.w3.org/2000/01/rdf-schema#comment**, **http://purl.org/dc/terms/description** + * Property chain: **http://www.w3.org/2004/02/skos/core#definition**, **http://www.w3.org/2004/02/skos/core#scopeNote**, **http://purl.org/dc/terms/description** * Types and Analyzer as above Language can be set for each connector. 
This is useful in case the data contain labels, definitions, and comments in multiple languages. In this case, @@ -117,34 +112,13 @@ there is a term with label `území`@cs and `area`@en. Now, if no language is sp look as follows: `území area`, which may not be desired. If the connector language is set to `cs`, the result snippet will contain only `území`. See the [documentation](http://graphdb.ontotext.com/documentation/free/lucene-graphdb-connector.html) for more details. -#### RDF4J - -In order to support the inference used by the application, new rules need to be added to RDF4J because its own RDFS rule engine does not -support OWL stuff like inverse properties (which are used in the model). - -For RDF4J 2.x: -1. Start by creating an RDF4J repository of type **RDFS+SPIN with Lucene support** -2. Upload SPIN rules from `rulesets/rules-termit-spin.ttl` into the repository -3. There is no need to configure Lucene connectors, it by default indexes all properties in RDF4J (alternatively, it is possible -to upload a repository configuration directly into the system repository - see examples at [[1]](https://github.com/eclipse/rdf4j/tree/master/core/repository/api/src/main/resources/org/eclipse/rdf4j/repository/config) -4. ----- - -For RDF4J 3.x: -1. Start by creating an RDF4J repository with RDFS and SPIN inference and Lucene support - * Copy repository configuration into the appropriate directory, as described at [[2]](https://rdf4j.eclipse.org/documentation/server-workbench-console/#repository-configuration) - * Native store with RDFS+SPIN and Lucene sample configuration is at [[3]](https://github.com/eclipse/rdf4j/blob/master/core/repository/api/src/main/resources/org/eclipse/rdf4j/repository/config/native-spin-rdfs-lucene.ttl) -2. Upload SPIN rules from `rulesets/rules-termit-spin.ttl` into the repository -3. There is no need to configure Lucene connectors, it by default indexes all properties in RDF4J -4. ----- - -#### Common - TermIt needs the repository to provide some inference. Beside loading the appropriate rulesets (see above), it is also necessary to load the ontological models into the repository. 5. Upload the following RDF files into the newly created repository: * `ontology/termit-glosář.ttl` * `ontology/termit-model.ttl` + * `ontology/sioc-ns.rdf` * `http://onto.fel.cvut.cz/ontologies/slovník/agendový/popis-dat/model` * `http://onto.fel.cvut.cz/ontologies/slovník/agendový/popis-dat/glosář` * `https://www.w3.org/TR/skos-reference/skos.rdf` @@ -203,4 +177,4 @@ TERMIT_SECURITY_PROVIDER=oidc TermIt will automatically configure its security accordingly (it is using Spring's [`ConditionalOnProperty`](https://www.baeldung.com/spring-conditionalonproperty)). -**Note that termit-ui needs to be configured for mathcing authentication mode.** +**Note that termit-ui needs to be configured for matching authentication mode.** diff --git "a/ontology/termit-glos\303\241\305\231.ttl" "b/ontology/termit-glos\303\241\305\231.ttl" index 0a36d37a4..f32e1e84b 100644 --- "a/ontology/termit-glos\303\241\305\231.ttl" +++ "b/ontology/termit-glos\303\241\305\231.ttl" @@ -640,3 +640,12 @@ termit-pojem:požadavek-na-změnu-hesla termit:glosář ; "Password reset request"@en , "Požadavek na změnu hesla"@cs . + +termit-pojem:má-adresu-modelovacího-nástroje + a ; + + , ; + + termit:glosář ; + + "Has modeling tool address"@en , "Má adresu modelovacího nástroje"@cs . 
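The setup notes above mention that TermIt switches its security setup automatically through Spring's ConditionalOnProperty when TERMIT_SECURITY_PROVIDER=oidc is set. A minimal sketch of such a conditional configuration follows; the class name and its contents are hypothetical and only illustrate the mechanism, they are not TermIt's actual code:

import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Configuration;

// Hypothetical sketch: this configuration is registered only when the property
// termit.security.provider (bound from the TERMIT_SECURITY_PROVIDER environment variable)
// has the value "oidc".
@Configuration
@ConditionalOnProperty(prefix = "termit.security", name = "provider", havingValue = "oidc")
public class OidcSecurityConfig {
    // OIDC-specific beans (e.g. a security filter chain validating tokens issued by the
    // configured OIDC provider) would be declared here.
}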
diff --git a/ontology/termit-model.ttl b/ontology/termit-model.ttl index b5ef518cc..09444df7f 100644 --- a/ontology/termit-model.ttl +++ b/ontology/termit-model.ttl @@ -337,4 +337,8 @@ termit-pojem:koncový-stav-pojmu termit-pojem:požadavek-na-změnu-hesla a , owl:Class . +termit-pojem:má-adresu-modelovacího-nástroje + a owl:AnnotationProperty , ; + rdfs:subPropertyOf . + diff --git a/pom.xml b/pom.xml index a46eb45cc..6a644e94d 100644 --- a/pom.xml +++ b/pom.xml @@ -7,11 +7,11 @@ org.springframework.boot spring-boot-starter-parent - 3.3.3 + 3.3.4 termit - 3.2.0 + 3.3.0 TermIt Terminology manager based on Semantic Web technologies. ${packaging} @@ -28,10 +28,10 @@ 17 - 2.7.0 - 1.6.0 + 3.0.0 + 1.6.2 2.6.0 - 2.0.5 + 2.1.0 0.15.0 @@ -119,7 +119,7 @@ com.github.ledsoft jopa-spring-transaction - 0.3.0 + 0.3.1 @@ -249,14 +249,14 @@ org.jsoup jsoup - 1.15.4 + 1.18.1 com.vladsch.flexmark flexmark-all - 0.64.6 + 0.64.8 @@ -273,7 +273,7 @@ org.apache.poi poi-ooxml - 5.2.2 + 5.3.0 @@ -287,7 +287,7 @@ org.apache.velocity velocity-engine-core - 2.3 + 2.4 @@ -394,7 +394,7 @@ - + graphdb diff --git a/profile/graphdb/query/fulltextsearch.rq b/profile/graphdb/query/fulltextsearch.rq index f03cf04ee..b020dc7d5 100644 --- a/profile/graphdb/query/fulltextsearch.rq +++ b/profile/graphdb/query/fulltextsearch.rq @@ -7,8 +7,9 @@ PREFIX inst: PREFIX rdf: PREFIX rdfs: PREFIX dc: +PREFIX skos: -SELECT DISTINCT ?entity ?label ?vocabularyUri ?state ?type ?snippetField ?snippetText ?score { +SELECT DISTINCT ?entity ?label ?description ?vocabularyUri ?state ?type ?snippetField ?snippetText ?score { { ?search a inst:label_index . } @@ -17,12 +18,21 @@ SELECT DISTINCT ?entity ?label ?vocabularyUri ?state ?type ?snippetField ?snippe ?search a inst:defcom_index . } { - ?entity rdfs:label ?label . + ?entity skos:prefLabel ?label . + OPTIONAL { + ?entity skos:definition ?definition . + } + OPTIONAL { + ?entity skos:scopeNote ?scopeNote . + } } UNION { ?entity dc:title ?label . + OPTIONAL { + ?entity dc:description ?dcDescription . + } } ?search :query ?wildCardSearchString ; - :snippetSize 2000 ; + :snippetSize 250 ; :entities ?entity . ?entity a ?type ; :score ?initScore ; @@ -38,7 +48,9 @@ SELECT DISTINCT ?entity ?label ?vocabularyUri ?state ?type ?snippetField ?snippe FILTER (?type = ?term || ?type = ?vocabulary) FILTER NOT EXISTS { ?entity a ?snapshot . 
} FILTER (lang(?label) = ?langTag) - BIND(IF(lcase(str(?snippetText)) = lcase(str(?splitExactMatch)), ?initScore * 2, IF(CONTAINS(lcase(str(?snippetText)), ?searchString), IF(?snippetField = "label", ?initScore * 1.5, ?initScore), ?initScore)) as ?exactMatchScore) - BIND(IF(?snippetField = "label", ?exactMatchScore * 2, IF(?snippetField = "definition", ?exactMatchScore * 1.2, ?exactMatchScore)) as ?score) + BIND(COALESCE(?definition, COALESCE(?scopeNote, ?dcDescription)) AS ?description) + FILTER (!BOUND(?description) || lang(?description) = ?langTag) + BIND(IF(lcase(str(?snippetText)) = lcase(str(?splitExactMatch)), ?initScore * 2, IF(CONTAINS(lcase(str(?snippetText)), ?searchString), IF(?snippetField = "prefLabel", ?initScore * 1.5, ?initScore), ?initScore)) as ?exactMatchScore) + BIND(IF(?snippetField = "prefLabel", ?exactMatchScore * 2, IF(?snippetField = "definition", ?exactMatchScore * 1.2, ?exactMatchScore)) as ?score) } ORDER BY desc(?score) diff --git a/src/main/java/cz/cvut/kbss/termit/dto/ConfigurationDto.java b/src/main/java/cz/cvut/kbss/termit/dto/ConfigurationDto.java index 8b4591fcc..460ab58d6 100644 --- a/src/main/java/cz/cvut/kbss/termit/dto/ConfigurationDto.java +++ b/src/main/java/cz/cvut/kbss/termit/dto/ConfigurationDto.java @@ -53,6 +53,9 @@ public class ConfigurationDto implements Serializable { @OWLDataProperty(iri = Vocabulary.s_p_ma_oddelovac_verze) private String versionSeparator; + @OWLAnnotationProperty(iri = Vocabulary.s_p_ma_adresu_modelovaciho_nastroje) + private String modelingToolUrl; + public String getLanguage() { return language; } @@ -92,4 +95,12 @@ public String getVersionSeparator() { public void setVersionSeparator(String versionSeparator) { this.versionSeparator = versionSeparator; } + + public String getModelingToolUrl() { + return modelingToolUrl; + } + + public void setModelingToolUrl(String modelingToolUrl) { + this.modelingToolUrl = modelingToolUrl; + } } diff --git a/src/main/java/cz/cvut/kbss/termit/dto/readonly/ReadOnlyTerm.java b/src/main/java/cz/cvut/kbss/termit/dto/readonly/ReadOnlyTerm.java index e43283c94..a0152a0e7 100644 --- a/src/main/java/cz/cvut/kbss/termit/dto/readonly/ReadOnlyTerm.java +++ b/src/main/java/cz/cvut/kbss/termit/dto/readonly/ReadOnlyTerm.java @@ -44,7 +44,7 @@ public class ReadOnlyTerm extends AbstractTerm { @OWLAnnotationProperty(iri = SKOS.ALT_LABEL) private Set altLabels; - @OWLAnnotationProperty(iri = SKOS.PREF_LABEL) + @OWLAnnotationProperty(iri = SKOS.HIDDEN_LABEL) private Set hiddenLabels; @OWLAnnotationProperty(iri = SKOS.SCOPE_NOTE) diff --git a/src/main/java/cz/cvut/kbss/termit/dto/search/FacetedSearchResult.java b/src/main/java/cz/cvut/kbss/termit/dto/search/FacetedSearchResult.java index 0fd546ca3..276144d82 100644 --- a/src/main/java/cz/cvut/kbss/termit/dto/search/FacetedSearchResult.java +++ b/src/main/java/cz/cvut/kbss/termit/dto/search/FacetedSearchResult.java @@ -160,10 +160,9 @@ public boolean equals(Object o) { if (this == o) { return true; } - if (!(o instanceof FacetedSearchResult)) { + if (!(o instanceof FacetedSearchResult that)) { return false; } - FacetedSearchResult that = (FacetedSearchResult) o; return Objects.equals(getUri(), that.getUri()); } diff --git a/src/main/java/cz/cvut/kbss/termit/dto/search/FullTextSearchResult.java b/src/main/java/cz/cvut/kbss/termit/dto/search/FullTextSearchResult.java index c107aea5d..76184e013 100644 --- a/src/main/java/cz/cvut/kbss/termit/dto/search/FullTextSearchResult.java +++ b/src/main/java/cz/cvut/kbss/termit/dto/search/FullTextSearchResult.java @@ 
-26,6 +26,7 @@ import cz.cvut.kbss.jopa.model.annotations.SparqlResultSetMapping; import cz.cvut.kbss.jopa.model.annotations.Types; import cz.cvut.kbss.jopa.model.annotations.VariableResult; +import cz.cvut.kbss.jopa.vocabulary.DC; import cz.cvut.kbss.jopa.vocabulary.RDFS; import cz.cvut.kbss.termit.model.util.HasIdentifier; import cz.cvut.kbss.termit.model.util.HasTypes; @@ -41,6 +42,7 @@ variables = { @VariableResult(name = "entity", type = URI.class), @VariableResult(name = "label", type = String.class), + @VariableResult(name = "description", type = String.class), @VariableResult(name = "vocabularyUri", type = URI.class), @VariableResult(name = "state", type = URI.class), @VariableResult(name = "type", type = String.class), @@ -59,6 +61,10 @@ public class FullTextSearchResult implements HasIdentifier, HasTypes, Serializab @OWLAnnotationProperty(iri = RDFS.LABEL) private String label; + // This could be term definition, scope note or vocabulary description + @OWLAnnotationProperty(iri = DC.Terms.DESCRIPTION) + private String description; + @OWLDataProperty(iri = Vocabulary.ONTOLOGY_IRI_TERMIT + "/fts/snippet-text") private String snippetText; @@ -80,10 +86,11 @@ public class FullTextSearchResult implements HasIdentifier, HasTypes, Serializab public FullTextSearchResult() { } - public FullTextSearchResult(URI uri, String label, URI vocabulary, URI state, String type, String snippetField, - String snippetText, Double score) { + public FullTextSearchResult(URI uri, String label, String description, URI vocabulary, URI state, String type, + String snippetField, String snippetText, Double score) { this.uri = uri; this.label = label; + this.description = description; this.vocabulary = vocabulary; this.state = state; this.types = Collections.singleton(type); @@ -110,6 +117,14 @@ public void setLabel(String label) { this.label = label; } + public String getDescription() { + return description; + } + + public void setDescription(String description) { + this.description = description; + } + public URI getVocabulary() { return vocabulary; } diff --git a/src/main/java/cz/cvut/kbss/termit/dto/search/MatchType.java b/src/main/java/cz/cvut/kbss/termit/dto/search/MatchType.java index ca49cfb37..ea014ba25 100644 --- a/src/main/java/cz/cvut/kbss/termit/dto/search/MatchType.java +++ b/src/main/java/cz/cvut/kbss/termit/dto/search/MatchType.java @@ -27,7 +27,7 @@ public enum MatchType { IRI, /** * Matches the specified value as a substring of the string representation of a property value in the repository. - * + *
+ * <p>
* Note that this match is not case-sensitive. */ SUBSTRING, diff --git a/src/main/java/cz/cvut/kbss/termit/dto/search/SearchParam.java b/src/main/java/cz/cvut/kbss/termit/dto/search/SearchParam.java index 207652649..1588a2211 100644 --- a/src/main/java/cz/cvut/kbss/termit/dto/search/SearchParam.java +++ b/src/main/java/cz/cvut/kbss/termit/dto/search/SearchParam.java @@ -89,10 +89,9 @@ public boolean equals(Object o) { if (this == o) { return true; } - if (!(o instanceof SearchParam)) { + if (!(o instanceof SearchParam that)) { return false; } - SearchParam that = (SearchParam) o; return Objects.equals(property, that.property) && Objects.equals(value, that.value) && matchType == that.matchType; } diff --git a/src/main/java/cz/cvut/kbss/termit/model/UserAccount.java b/src/main/java/cz/cvut/kbss/termit/model/UserAccount.java index 1de862f79..add68792d 100644 --- a/src/main/java/cz/cvut/kbss/termit/model/UserAccount.java +++ b/src/main/java/cz/cvut/kbss/termit/model/UserAccount.java @@ -26,6 +26,7 @@ import cz.cvut.kbss.termit.model.util.HasIdentifier; import cz.cvut.kbss.termit.model.util.HasTypes; import cz.cvut.kbss.termit.security.model.UserRole; +import cz.cvut.kbss.termit.util.Utils; import cz.cvut.kbss.termit.util.Vocabulary; import jakarta.validation.constraints.NotBlank; @@ -272,6 +273,6 @@ public int hashCode() { @Override public String toString() { - return "UserAccount{" + getFullName() + ", username='" + username + '\'' + '}'; + return "UserAccount{" + Utils.uriToString(getUri()) + getFullName() + ", username='" + username + '\'' + '}'; } } diff --git a/src/main/java/cz/cvut/kbss/termit/model/changetracking/AbstractChangeRecord.java b/src/main/java/cz/cvut/kbss/termit/model/changetracking/AbstractChangeRecord.java index 58d100b92..6595bf126 100644 --- a/src/main/java/cz/cvut/kbss/termit/model/changetracking/AbstractChangeRecord.java +++ b/src/main/java/cz/cvut/kbss/termit/model/changetracking/AbstractChangeRecord.java @@ -27,6 +27,7 @@ import cz.cvut.kbss.termit.model.Asset; import cz.cvut.kbss.termit.model.User; import cz.cvut.kbss.termit.util.Vocabulary; +import jakarta.annotation.Nonnull; import java.net.URI; import java.time.Instant; @@ -37,7 +38,7 @@ */ @JsonTypeInfo(use = JsonTypeInfo.Id.CLASS, property = "className") @OWLClass(iri = Vocabulary.s_c_zmena) -public class AbstractChangeRecord extends AbstractEntity { +public class AbstractChangeRecord extends AbstractEntity implements Comparable { @ParticipationConstraints(nonEmpty = true) @OWLDataProperty(iri = Vocabulary.s_p_ma_datum_a_cas_modifikace) @@ -106,4 +107,14 @@ public String toString() { ", author=" + author + ", changedEntity=" + changedEntity; } + + @Override + public int compareTo(@Nonnull AbstractChangeRecord o) { + final int timestampDiff = getTimestamp().compareTo(o.getTimestamp()); + if (timestampDiff != 0) { + return timestampDiff; + } + + return getUri().compareTo(o.getUri()); + } } diff --git a/src/main/java/cz/cvut/kbss/termit/model/changetracking/PersistChangeRecord.java b/src/main/java/cz/cvut/kbss/termit/model/changetracking/PersistChangeRecord.java index da51d2ebf..6fccde3d6 100644 --- a/src/main/java/cz/cvut/kbss/termit/model/changetracking/PersistChangeRecord.java +++ b/src/main/java/cz/cvut/kbss/termit/model/changetracking/PersistChangeRecord.java @@ -20,6 +20,7 @@ import cz.cvut.kbss.jopa.model.annotations.OWLClass; import cz.cvut.kbss.termit.model.Asset; import cz.cvut.kbss.termit.util.Vocabulary; +import jakarta.annotation.Nonnull; @OWLClass(iri = Vocabulary.s_c_vytvoreni_entity) public class 
PersistChangeRecord extends AbstractChangeRecord { @@ -35,4 +36,12 @@ public PersistChangeRecord(Asset changedAsset) { public String toString() { return "PersistChangeRecord{" + super.toString() + '}'; } + + @Override + public int compareTo(@Nonnull AbstractChangeRecord o) { + if (o instanceof UpdateChangeRecord) { + return -1; + } + return super.compareTo(o); + } } diff --git a/src/main/java/cz/cvut/kbss/termit/model/changetracking/UpdateChangeRecord.java b/src/main/java/cz/cvut/kbss/termit/model/changetracking/UpdateChangeRecord.java index ecbcc41ee..e1220f9f4 100644 --- a/src/main/java/cz/cvut/kbss/termit/model/changetracking/UpdateChangeRecord.java +++ b/src/main/java/cz/cvut/kbss/termit/model/changetracking/UpdateChangeRecord.java @@ -23,6 +23,7 @@ import cz.cvut.kbss.jopa.model.annotations.ParticipationConstraints; import cz.cvut.kbss.termit.model.Asset; import cz.cvut.kbss.termit.util.Vocabulary; +import jakarta.annotation.Nonnull; import java.net.URI; import java.util.Objects; @@ -98,4 +99,12 @@ public String toString() { "changedAttribute=" + changedAttribute + "}"; } + + @Override + public int compareTo(@Nonnull AbstractChangeRecord o) { + if (o instanceof PersistChangeRecord) { + return 1; + } + return super.compareTo(o); + } } diff --git a/src/main/java/cz/cvut/kbss/termit/persistence/dao/TermDao.java b/src/main/java/cz/cvut/kbss/termit/persistence/dao/TermDao.java index 50d138f16..052035b25 100644 --- a/src/main/java/cz/cvut/kbss/termit/persistence/dao/TermDao.java +++ b/src/main/java/cz/cvut/kbss/termit/persistence/dao/TermDao.java @@ -18,6 +18,7 @@ package cz.cvut.kbss.termit.persistence.dao; import cz.cvut.kbss.jopa.exceptions.NoResultException; +import cz.cvut.kbss.jopa.exceptions.NoUniqueResultException; import cz.cvut.kbss.jopa.model.EntityManager; import cz.cvut.kbss.jopa.model.query.TypedQuery; import cz.cvut.kbss.jopa.vocabulary.SKOS; @@ -63,6 +64,8 @@ public class TermDao extends BaseAssetDao implements SnapshotProvider { private static final URI LABEL_PROP = URI.create(SKOS.PREF_LABEL); + private static final URI TERM_FROM_VOCABULARY = URI.create( + cz.cvut.kbss.termit.util.Vocabulary.s_p_je_pojmem_ze_slovniku); private final Cache> subTermsCache; @@ -87,9 +90,25 @@ protected URI labelProperty() { @Override public Optional find(URI id) { - final Optional result = super.find(id); - result.ifPresent(this::postLoad); - return result; + try { + final Optional result = Optional.ofNullable( + em.find(Term.class, id, descriptorFactory.termDescriptor(resolveVocabularyId(id)))); + result.ifPresent(this::postLoad); + return result; + } catch (RuntimeException e) { + throw new PersistenceException(e); + } + } + + private URI resolveVocabularyId(URI termId) { + try { + return em.createNativeQuery("SELECT DISTINCT ?v WHERE { ?t ?inVocabulary ?v . 
}", URI.class) + .setParameter("t", termId) + .setParameter("inVocabulary", TERM_FROM_VOCABULARY) + .getSingleResult(); + } catch (NoResultException | NoUniqueResultException e) { + throw new PersistenceException("Unable to resolve term vocabulary.", e); + } } private void postLoad(Term r) { @@ -282,9 +301,7 @@ public List findAll(Vocabulary vocabulary) { .setParameter("type", typeUri) .setParameter("vocabulary", vocabulary.getUri()) .setParameter("hasLabel", LABEL_PROP) - .setParameter("inVocabulary", - URI.create( - cz.cvut.kbss.termit.util.Vocabulary.s_p_je_pojmem_ze_slovniku)) + .setParameter("inVocabulary", TERM_FROM_VOCABULARY) .setParameter("labelLang", config.getLanguage())); } catch (RuntimeException e) { throw new PersistenceException(e); @@ -322,9 +339,7 @@ public List findAllFull(Vocabulary vocabulary) { .setParameter("context", context(vocabulary)) .setParameter("vocabulary", vocabulary.getUri()) .setParameter("hasLabel", LABEL_PROP) - .setParameter("inVocabulary", - URI.create( - cz.cvut.kbss.termit.util.Vocabulary.s_p_je_pojmem_ze_slovniku)) + .setParameter("inVocabulary", TERM_FROM_VOCABULARY) .setParameter("labelLang", config.getLanguage()).getResultList(); return termIris.stream().map(ti -> { final Term t = find(ti).get(); @@ -378,9 +393,7 @@ public List findAllIncludingImported(Vocabulary vocabulary) { "} ORDER BY " + orderSentence("?label"), TermDto.class) .setParameter("type", typeUri) .setParameter("hasLabel", LABEL_PROP) - .setParameter("inVocabulary", - URI.create( - cz.cvut.kbss.termit.util.Vocabulary.s_p_je_pojmem_ze_slovniku)) + .setParameter("inVocabulary", TERM_FROM_VOCABULARY) .setParameter("imports", URI.create( cz.cvut.kbss.termit.util.Vocabulary.s_p_importuje_slovnik)) @@ -604,8 +617,7 @@ public List findAll(String searchString, Vocabulary vocabulary) { .setParameter("type", typeUri) .setParameter("context", context(vocabulary)) .setParameter("hasLabel", LABEL_PROP) - .setParameter("inVocabulary", URI.create( - cz.cvut.kbss.termit.util.Vocabulary.s_p_je_pojmem_ze_slovniku)) + .setParameter("inVocabulary", TERM_FROM_VOCABULARY) .setParameter("vocabulary", vocabulary.getUri()) .setParameter("searchString", searchString, config.getLanguage()); try { @@ -635,8 +647,7 @@ public List findAll(String searchString) { TermDto.class) .setParameter("type", typeUri) .setParameter("hasLabel", LABEL_PROP) - .setParameter("inVocabulary", URI.create( - cz.cvut.kbss.termit.util.Vocabulary.s_p_je_pojmem_ze_slovniku)) + .setParameter("inVocabulary", TERM_FROM_VOCABULARY) .setParameter("snapshot", URI.create( cz.cvut.kbss.termit.util.Vocabulary.s_c_verze_pojmu)) .setParameter("searchString", searchString, config.getLanguage()); @@ -679,8 +690,7 @@ public List findAllIncludingImported(String searchString, Vocabulary vo TermDto.class) .setParameter("type", typeUri) .setParameter("hasLabel", LABEL_PROP) - .setParameter("inVocabulary", URI.create( - cz.cvut.kbss.termit.util.Vocabulary.s_p_je_pojmem_ze_slovniku)) + .setParameter("inVocabulary", TERM_FROM_VOCABULARY) .setParameter("imports", URI.create( cz.cvut.kbss.termit.util.Vocabulary.s_p_importuje_slovnik)) @@ -717,8 +727,7 @@ public boolean existsInVocabulary(String label, Vocabulary vocabulary, String la + "}", Boolean.class) .setParameter("type", typeUri) .setParameter("hasLabel", LABEL_PROP) - .setParameter("inVocabulary", - URI.create(cz.cvut.kbss.termit.util.Vocabulary.s_p_je_pojmem_ze_slovniku)) + .setParameter("inVocabulary", TERM_FROM_VOCABULARY) .setParameter("vocabulary", vocabulary.getUri()) 
.setParameter("searchString", label, languageTag != null ? languageTag : config.getLanguage()).getSingleResult(); @@ -744,17 +753,17 @@ public Optional findIdentifierByLabel(String label, Vocabulary vocabulary, Objects.requireNonNull(vocabulary); try { return Optional.of(em.createNativeQuery("SELECT ?term { ?term a ?type ; " + - "?hasLabel ?label ;" + - "?inVocabulary ?vocabulary ." + - "FILTER (LCASE(?label) = LCASE(?searchString)) . " - + "}", URI.class) - .setParameter("type", typeUri) - .setParameter("hasLabel", LABEL_PROP) - .setParameter("inVocabulary", - URI.create(cz.cvut.kbss.termit.util.Vocabulary.s_p_je_pojmem_ze_slovniku)) - .setParameter("vocabulary", vocabulary.getUri()) - .setParameter("searchString", label, - languageTag != null ? languageTag : config.getLanguage()).getSingleResult()); + "?hasLabel ?label ;" + + "?inVocabulary ?vocabulary ." + + "FILTER (LCASE(?label) = LCASE(?searchString)) . " + + "}", URI.class) + .setParameter("type", typeUri) + .setParameter("hasLabel", LABEL_PROP) + .setParameter("inVocabulary", TERM_FROM_VOCABULARY) + .setParameter("vocabulary", vocabulary.getUri()) + .setParameter("searchString", label, + languageTag != null ? languageTag : config.getLanguage()) + .getSingleResult()); } catch (NoResultException e) { return Optional.empty(); } catch (RuntimeException e) { @@ -776,9 +785,7 @@ public List findAllUnused(Vocabulary vocabulary) { + "}", URI.class) .setParameter("vocabulary", vocabulary.getUri()) - .setParameter("inVocabulary", - URI.create( - cz.cvut.kbss.termit.util.Vocabulary.s_p_je_pojmem_ze_slovniku)) + .setParameter("inVocabulary", TERM_FROM_VOCABULARY) .setParameter("hasTerm", URI.create(cz.cvut.kbss.termit.util.Vocabulary.s_p_je_prirazenim_termu)) .setParameter("hasTarget", URI.create(cz.cvut.kbss.termit.util.Vocabulary.s_p_ma_cil)) .setParameter("hasSource", URI.create(cz.cvut.kbss.termit.util.Vocabulary.s_p_ma_zdroj)) diff --git a/src/main/java/cz/cvut/kbss/termit/persistence/dao/VocabularyDao.java b/src/main/java/cz/cvut/kbss/termit/persistence/dao/VocabularyDao.java index 21e5233f4..d0cd42ea8 100644 --- a/src/main/java/cz/cvut/kbss/termit/persistence/dao/VocabularyDao.java +++ b/src/main/java/cz/cvut/kbss/termit/persistence/dao/VocabularyDao.java @@ -19,6 +19,7 @@ import cz.cvut.kbss.jopa.model.EntityManager; import cz.cvut.kbss.jopa.model.query.Query; +import cz.cvut.kbss.jopa.model.query.TypedQuery; import cz.cvut.kbss.jopa.vocabulary.DC; import cz.cvut.kbss.jopa.vocabulary.SKOS; import cz.cvut.kbss.termit.asset.provenance.ModifiesData; @@ -35,6 +36,7 @@ import cz.cvut.kbss.termit.model.Glossary; import cz.cvut.kbss.termit.model.Term; import cz.cvut.kbss.termit.model.Vocabulary; +import cz.cvut.kbss.termit.model.changetracking.AbstractChangeRecord; import cz.cvut.kbss.termit.model.resource.Document; import cz.cvut.kbss.termit.model.util.EntityToOwlClassMapper; import cz.cvut.kbss.termit.model.validation.ValidationResult; @@ -51,6 +53,7 @@ import org.springframework.beans.factory.annotation.Autowired; import org.springframework.context.ApplicationContext; import org.springframework.context.event.EventListener; +import org.springframework.data.domain.Pageable; import org.springframework.stereotype.Repository; import org.springframework.transaction.annotation.Transactional; @@ -388,6 +391,41 @@ public List getChangesOfContent(Vocabulary vocabulary) { return result; } + /** + * Gets content change records of the specified vocabulary. 
+ * + * @param vocabulary Vocabulary whose content changes to get + * @param pageReq Specification of the size and number of the page to return + * @return List of change records, ordered by date in descending order + */ + public List getDetailedHistoryOfContent(Vocabulary vocabulary, Pageable pageReq) { + Objects.requireNonNull(vocabulary); + return createDetailedContentChangesQuery(vocabulary, pageReq).getResultList(); + } + + private TypedQuery createDetailedContentChangesQuery(Vocabulary vocabulary, Pageable pageReq) { + return em.createNativeQuery(""" + SELECT ?record WHERE { + ?term ?inVocabulary ?vocabulary ; + a ?termType . + ?record a ?changeRecord ; + ?relatesTo ?term ; + ?hasTime ?timestamp . + OPTIONAL { ?record ?hasChangedAttribute ?attribute . } + } ORDER BY DESC(?timestamp) ?attribute + """, AbstractChangeRecord.class) + .setParameter("inVocabulary", + URI.create(cz.cvut.kbss.termit.util.Vocabulary.s_p_je_pojmem_ze_slovniku)) + .setParameter("vocabulary", vocabulary) + .setParameter("termType", URI.create(SKOS.CONCEPT)) + .setParameter("changeRecord", URI.create(cz.cvut.kbss.termit.util.Vocabulary.s_c_zmena)) + .setParameter("relatesTo", URI.create(cz.cvut.kbss.termit.util.Vocabulary.s_p_ma_zmenenou_entitu)) + .setParameter("hasTime", URI.create(cz.cvut.kbss.termit.util.Vocabulary.s_p_ma_datum_a_cas_modifikace)) + .setParameter("hasChangedAttribute", URI.create(cz.cvut.kbss.termit.util.Vocabulary.s_p_ma_zmeneny_atribut)) + .setFirstResult((int) pageReq.getOffset()) + .setMaxResults(pageReq.getPageSize()); + } + private Query createContentChangesQuery(Vocabulary vocabulary) { return em.createNativeQuery(CONTENT_CHANGES_QUERY, "AggregatedChangeInfo") .setParameter("hasEntity", diff --git a/src/main/java/cz/cvut/kbss/termit/rest/VocabularyController.java b/src/main/java/cz/cvut/kbss/termit/rest/VocabularyController.java index c03272516..e8cd5afb4 100644 --- a/src/main/java/cz/cvut/kbss/termit/rest/VocabularyController.java +++ b/src/main/java/cz/cvut/kbss/termit/rest/VocabularyController.java @@ -33,6 +33,7 @@ import cz.cvut.kbss.termit.service.IdentifierResolver; import cz.cvut.kbss.termit.service.business.VocabularyService; import cz.cvut.kbss.termit.util.Configuration; +import cz.cvut.kbss.termit.util.Constants; import cz.cvut.kbss.termit.util.Constants.QueryParams; import cz.cvut.kbss.termit.util.TypeAwareResource; import io.swagger.v3.oas.annotations.Operation; @@ -44,6 +45,7 @@ import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; +import org.springframework.data.domain.Pageable; import org.springframework.http.HttpHeaders; import org.springframework.http.HttpStatus; import org.springframework.http.MediaType; @@ -69,6 +71,8 @@ import java.util.List; import java.util.Optional; +import static cz.cvut.kbss.termit.rest.util.RestUtils.createPageRequest; + /** * Vocabulary management REST API. *
<p>
@@ -80,6 +84,8 @@ @RequestMapping("/vocabularies") @PreAuthorize("hasRole('" + SecurityConstants.ROLE_RESTRICTED_USER + "')") public class VocabularyController extends BaseController { + static final String DEFAULT_PAGE_SIZE = "10"; + static final String DEFAULT_PAGE = "0"; private static final Logger LOG = LoggerFactory.getLogger(VocabularyController.class); @@ -283,6 +289,28 @@ public List getHistoryOfContent( return vocabularyService.getChangesOfContent(vocabulary); } + @Operation(security = {@SecurityRequirement(name = "bearer-key")}, + description = "Gets a list of changes made to the content of the vocabulary (term creation, editing).") + @ApiResponses({@ApiResponse(responseCode = "200", description = "List of change records."), + @ApiResponse(responseCode = "404", description = ApiDoc.ID_NOT_FOUND_DESCRIPTION)}) + @GetMapping(value = "/{localName}/history-of-content/detail", + produces = {MediaType.APPLICATION_JSON_VALUE, JsonLd.MEDIA_TYPE}) + public List getDetailedHistoryOfContent( + @Parameter(description = ApiDoc.ID_LOCAL_NAME_DESCRIPTION, + example = ApiDoc.ID_LOCAL_NAME_EXAMPLE) @PathVariable String localName, + @Parameter(description = ApiDoc.ID_NAMESPACE_DESCRIPTION, + example = ApiDoc.ID_NAMESPACE_EXAMPLE) @RequestParam(name = QueryParams.NAMESPACE, + required = false) Optional namespace, + @Parameter(description = ApiDocConstants.PAGE_SIZE_DESCRIPTION) @RequestParam( + name = Constants.QueryParams.PAGE_SIZE, required = false, + defaultValue = DEFAULT_PAGE_SIZE) Integer pageSize, + @Parameter(description = ApiDocConstants.PAGE_NO_DESCRIPTION) @RequestParam( + name = Constants.QueryParams.PAGE, required = false, defaultValue = DEFAULT_PAGE) Integer pageNo) { + final Pageable pageReq = createPageRequest(pageSize, pageNo); + final Vocabulary vocabulary = vocabularyService.getReference(resolveVocabularyUri(localName, namespace)); + return vocabularyService.getDetailedHistoryOfContent(vocabulary, pageReq); + } + @Operation(security = {@SecurityRequirement(name = "bearer-key")}, description = "Updates metadata of vocabulary with the specified identifier.") @ApiResponses({ diff --git a/src/main/java/cz/cvut/kbss/termit/service/business/VocabularyService.java b/src/main/java/cz/cvut/kbss/termit/service/business/VocabularyService.java index 6d2d409ae..fe6d9b20a 100644 --- a/src/main/java/cz/cvut/kbss/termit/service/business/VocabularyService.java +++ b/src/main/java/cz/cvut/kbss/termit/service/business/VocabularyService.java @@ -56,6 +56,7 @@ import org.springframework.context.ApplicationEventPublisherAware; import org.springframework.context.annotation.Lazy; import org.springframework.context.event.EventListener; +import org.springframework.data.domain.Pageable; import org.springframework.security.access.prepost.PostAuthorize; import org.springframework.security.access.prepost.PostFilter; import org.springframework.security.access.prepost.PreAuthorize; @@ -123,9 +124,8 @@ public VocabularyService(VocabularyRepositoryService repositoryService, } /** - * Receives {@link VocabularyContentModifiedEvent} and triggers validation. - * The goal for this is to get the results cached and do not force users to wait for validation - * when they request it. + * Receives {@link VocabularyContentModifiedEvent} and triggers validation. The goal for this is to get the results + * cached and do not force users to wait for validation when they request it. 
*/ @EventListener({VocabularyContentModifiedEvent.class, VocabularyCreatedEvent.class}) public void onVocabularyContentModified(VocabularyEvent event) { @@ -216,11 +216,26 @@ public Set getRelatedVocabularies(Vocabulary entity) { return repositoryService.getRelatedVocabularies(entity); } + /** + * Gets statements representing SKOS relationships between terms from the specified vocabulary and terms from other + * vocabularies. + * + * @param vocabulary Vocabulary whose terms' relationships to retrieve + * @return List of RDF statements + */ @PreAuthorize("@vocabularyAuthorizationService.canRead(#vocabulary)") public List getTermRelations(Vocabulary vocabulary) { return repositoryService.getTermRelations(vocabulary); } + /** + * Gets statements representing relationships between the specified vocabulary and other vocabularies. + *
<p>
+ * A selected set of relationships is excluded (for example, versioning relationships). + * + * @param vocabulary Vocabulary whose relationships to retrieve + * @return List of RDF statements + */ @PreAuthorize("@vocabularyAuthorizationService.canRead(#vocabulary)") public List getVocabularyRelations(Vocabulary vocabulary) { return repositoryService.getVocabularyRelations(vocabulary, VOCABULARY_REMOVAL_IGNORED_RELATIONS); @@ -297,6 +312,17 @@ public List getChangesOfContent(Vocabulary vocabulary) { return repositoryService.getChangesOfContent(vocabulary); } + /** + * Gets content change records of the specified vocabulary. + * + * @param vocabulary Vocabulary whose content changes to get + * @param pageReq Specification of the size and number of the page to return + * @return List of change records, ordered by date in descending order + */ + public List getDetailedHistoryOfContent(Vocabulary vocabulary, Pageable pageReq) { + return repositoryService.getDetailedHistoryOfContent(vocabulary, pageReq); + } + /** * Runs text analysis on the definitions of all terms in the specified vocabulary, including terms in the * transitively imported vocabularies. diff --git a/src/main/java/cz/cvut/kbss/termit/service/config/ConfigurationProvider.java b/src/main/java/cz/cvut/kbss/termit/service/config/ConfigurationProvider.java index 2673d7391..084e75f56 100644 --- a/src/main/java/cz/cvut/kbss/termit/service/config/ConfigurationProvider.java +++ b/src/main/java/cz/cvut/kbss/termit/service/config/ConfigurationProvider.java @@ -59,6 +59,7 @@ public ConfigurationDto getConfiguration() { result.setRoles(new HashSet<>(service.findAll())); result.setMaxFileUploadSize(maxFileUploadSize); result.setVersionSeparator(config.getNamespace().getSnapshot().getSeparator()); + result.setModelingToolUrl(config.getModelingToolUrl()); return result; } } diff --git a/src/main/java/cz/cvut/kbss/termit/service/document/html/UnconfirmedTermOccurrenceRemover.java b/src/main/java/cz/cvut/kbss/termit/service/document/html/UnconfirmedTermOccurrenceRemover.java index 60baf9437..1292e0d68 100644 --- a/src/main/java/cz/cvut/kbss/termit/service/document/html/UnconfirmedTermOccurrenceRemover.java +++ b/src/main/java/cz/cvut/kbss/termit/service/document/html/UnconfirmedTermOccurrenceRemover.java @@ -3,10 +3,13 @@ import cz.cvut.kbss.termit.exception.FileContentProcessingException; import cz.cvut.kbss.termit.util.TypeAwareByteArrayResource; import cz.cvut.kbss.termit.util.TypeAwareResource; +import jakarta.annotation.Nonnull; import org.jsoup.Jsoup; import org.jsoup.nodes.Document; import org.jsoup.nodes.Node; import org.jsoup.select.Elements; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; import java.io.BufferedWriter; import java.io.ByteArrayOutputStream; @@ -19,6 +22,7 @@ */ public class UnconfirmedTermOccurrenceRemover { + private static final Logger LOG = LoggerFactory.getLogger(UnconfirmedTermOccurrenceRemover.class); /** * Removes unconfirmed term occurrences from the specified input. 
@@ -31,7 +35,8 @@ public class UnconfirmedTermOccurrenceRemover { * @param input Input to process * @return Processed content */ - public TypeAwareResource removeUnconfirmedOccurrences(TypeAwareResource input) { + public TypeAwareResource removeUnconfirmedOccurrences(@Nonnull TypeAwareResource input) { + LOG.trace("Removing unconfirmed occurrences from file {}.", input.getFilename()); try { final Document doc = Jsoup.parse(input.getInputStream(), StandardCharsets.UTF_8.name(), ""); doc.outputSettings().prettyPrint(false); diff --git a/src/main/java/cz/cvut/kbss/termit/service/repository/VocabularyRepositoryService.java b/src/main/java/cz/cvut/kbss/termit/service/repository/VocabularyRepositoryService.java index 9d055b183..6be0b86d4 100644 --- a/src/main/java/cz/cvut/kbss/termit/service/repository/VocabularyRepositoryService.java +++ b/src/main/java/cz/cvut/kbss/termit/service/repository/VocabularyRepositoryService.java @@ -29,6 +29,7 @@ import cz.cvut.kbss.termit.model.Glossary; import cz.cvut.kbss.termit.model.Model; import cz.cvut.kbss.termit.model.Vocabulary; +import cz.cvut.kbss.termit.model.changetracking.AbstractChangeRecord; import cz.cvut.kbss.termit.model.resource.Document; import cz.cvut.kbss.termit.model.validation.ValidationResult; import cz.cvut.kbss.termit.persistence.dao.BaseAssetDao; @@ -52,6 +53,7 @@ import org.springframework.cache.annotation.CacheConfig; import org.springframework.cache.annotation.CacheEvict; import org.springframework.cache.annotation.Cacheable; +import org.springframework.data.domain.Pageable; import org.springframework.security.access.prepost.PreAuthorize; import org.springframework.stereotype.Service; import org.springframework.transaction.annotation.Transactional; @@ -218,6 +220,18 @@ public List getChangesOfContent(Vocabulary vocabulary) { return vocabularyDao.getChangesOfContent(vocabulary); } + /** + * Gets content change records of the specified vocabulary. + * + * @param vocabulary Vocabulary whose content changes to get + * @param pageReq Specification of the size and number of the page to return + * @return List of change records, ordered by date in descending order + */ + @Transactional(readOnly = true) + public List getDetailedHistoryOfContent(Vocabulary vocabulary, Pageable pageReq) { + return vocabularyDao.getDetailedHistoryOfContent(vocabulary, pageReq); + } + @CacheEvict(allEntries = true) @Transactional public Vocabulary importVocabulary(boolean rename, MultipartFile file) { diff --git a/src/main/java/cz/cvut/kbss/termit/util/Configuration.java b/src/main/java/cz/cvut/kbss/termit/util/Configuration.java index e066fe444..8a655df59 100644 --- a/src/main/java/cz/cvut/kbss/termit/util/Configuration.java +++ b/src/main/java/cz/cvut/kbss/termit/util/Configuration.java @@ -49,6 +49,14 @@ public class Configuration { * It is used, for example, for links in emails sent to users. */ private String url = "http://localhost:3000/#"; + + /** + * URL of the modeling tool. + *
<p>
+ * The modeling tool can be used to further specify the relationships between terms. + */ + private String modelingToolUrl; + /** * Name of the JMX bean exported by TermIt. *
<p>
@@ -88,6 +96,7 @@ public class Configuration { *
<p>
* By default, generated identifiers may contain accented characters (like č). Setting this configuration to * {@code true} ensures all generated identifiers are ASCII-only and accented character are normalized to ASCII. + * * @configurationdoc.default false */ private boolean asciiIdentifiers = false; @@ -139,6 +148,14 @@ public void setUrl(String url) { this.url = url; } + public String getModelingToolUrl() { + return modelingToolUrl; + } + + public void setModelingToolUrl(String modelingToolUrl) { + this.modelingToolUrl = modelingToolUrl; + } + public String getJmxBeanName() { return jmxBeanName; } @@ -668,8 +685,6 @@ public static class TextAnalysis { @Min(8) private int textQuoteSelectorContextLength = 32; - private boolean disableVocabularyAnalysisOnTermEdit = false; - public String getUrl() { return url; } @@ -693,14 +708,6 @@ public int getTextQuoteSelectorContextLength() { public void setTextQuoteSelectorContextLength(int textQuoteSelectorContextLength) { this.textQuoteSelectorContextLength = textQuoteSelectorContextLength; } - - public boolean isDisableVocabularyAnalysisOnTermEdit() { - return disableVocabularyAnalysisOnTermEdit; - } - - public void setDisableVocabularyAnalysisOnTermEdit(boolean disableVocabularyAnalysisOnTermEdit) { - this.disableVocabularyAnalysisOnTermEdit = disableVocabularyAnalysisOnTermEdit; - } } @Validated diff --git a/src/main/resources/org/apache/tika/mime/custom-mimetypes.xml b/src/main/resources/custom-mimetypes.xml similarity index 100% rename from src/main/resources/org/apache/tika/mime/custom-mimetypes.xml rename to src/main/resources/custom-mimetypes.xml diff --git a/src/main/resources/query/fulltextsearch.rq b/src/main/resources/query/fulltextsearch.rq index b6a2255b6..fc4050588 100644 --- a/src/main/resources/query/fulltextsearch.rq +++ b/src/main/resources/query/fulltextsearch.rq @@ -15,15 +15,26 @@ SELECT ?entity ?label ?vocabularyUri ?state ?type ?snippetField ?snippetText WHE skos:prefLabel ?label ; ?inVocabulary ?vocabularyUri . OPTIONAL { ?entity ?hasState ?state . } + OPTIONAL { + ?entity skos:definition ?definition . + } + OPTIONAL { + ?entity skos:scopeNote ?scopeNote . + } BIND (?term as ?type) . } UNION { ?entity a ?vocabulary ; dc:title ?label . + OPTIONAL { + ?entity dc:description ?dcDescription . + } BIND (?vocabulary as ?type) . } BIND (?label as ?snippetText) . - BIND (str("label") as ?snippetField) . + BIND (str("prefLabel") as ?snippetField) . FILTER CONTAINS(LCASE(?label), LCASE(?searchString)) . FILTER (lang(?label) = ?langTag) FILTER NOT EXISTS { ?entity a ?snapshot . 
} + BIND(COALESCE(?definition, COALESCE(?scopeNote, ?dcDescription)) AS ?description) + FILTER (!BOUND(?description) || lang(?description) = ?langTag) } ORDER BY ?label diff --git a/src/test/java/cz/cvut/kbss/termit/persistence/dao/TermDaoTest.java b/src/test/java/cz/cvut/kbss/termit/persistence/dao/TermDaoTest.java index 461f2576b..036c8bcf4 100644 --- a/src/test/java/cz/cvut/kbss/termit/persistence/dao/TermDaoTest.java +++ b/src/test/java/cz/cvut/kbss/termit/persistence/dao/TermDaoTest.java @@ -1049,8 +1049,12 @@ void subTermLoadingSortsThemByLabel() { transactional(() -> { vocabulary.getGlossary().addRootTerm(parent); em.merge(vocabulary.getGlossary(), descriptorFactory.glossaryDescriptor(vocabulary)); + Generator.addTermInVocabularyRelationship(parent, vocabulary.getUri(), em); em.persist(parent, descriptorFactory.termDescriptor(vocabulary)); - children.forEach(child -> em.persist(child, descriptorFactory.termDescriptor(vocabulary))); + children.forEach(child -> { + em.persist(child, descriptorFactory.termDescriptor(vocabulary)); + Generator.addTermInVocabularyRelationship(child, vocabulary.getUri(), em); + }); }); children.sort(Comparator.comparing(child -> child.getLabel().get(Environment.LANGUAGE))); @@ -1372,4 +1376,22 @@ void findIdentifierByLabelReturnsEmptyOptionalIfNoTermIsFound() { final Optional result = sut.findIdentifierByLabel("foo", vocabulary, Environment.LANGUAGE); assertFalse(result.isPresent()); } + + @Test + void findByIdLoadsTermFromVocabularyContextOnly() { + final Term term = Generator.generateTermWithId(vocabulary.getUri()); + addTermsAndSave(List.of(term), vocabulary); + final String property = "http://onto.fel.cvut.cz/ontologies/application/ontoGrapher/scheme"; + transactional(() -> { + try (final RepositoryConnection con = em.unwrap(Repository.class).getConnection()) { + final ValueFactory vf = con.getValueFactory(); + con.add(vf.createStatement(vf.createIRI(term.getUri().toString()), vf.createIRI(property), + vf.createIRI(vocabulary.getGlossary().getUri().toString()), vf.createIRI(Generator.generateUriString()))); + } + }); + + final Optional result = sut.find(term.getUri()); + assertTrue(result.isPresent()); + assertFalse(result.get().getProperties().containsKey(property)); + } } diff --git a/src/test/java/cz/cvut/kbss/termit/rest/SearchControllerTest.java b/src/test/java/cz/cvut/kbss/termit/rest/SearchControllerTest.java index d8abed694..a25fb2c9c 100644 --- a/src/test/java/cz/cvut/kbss/termit/rest/SearchControllerTest.java +++ b/src/test/java/cz/cvut/kbss/termit/rest/SearchControllerTest.java @@ -75,7 +75,7 @@ void setUp() { void fullTextSearchExecutesSearchOnService() throws Exception { final List expected = Collections .singletonList( - new FullTextSearchResult(Generator.generateUri(), "test", null, null, SKOS.CONCEPT, + new FullTextSearchResult(Generator.generateUri(), "test", null, null, null, SKOS.CONCEPT, "test", "test", 1.0)); when(searchServiceMock.fullTextSearch(any())).thenReturn(expected); final String searchString = "test"; @@ -95,7 +95,7 @@ void fullTextSearchExecutesSearchOnService() throws Exception { void fullTextSearchOfTermsWithoutVocabularySpecificationExecutesSearchOnService() throws Exception { final URI vocabularyIri = URI.create("https://test.org/vocabulary"); final List expected = Collections - .singletonList(new FullTextSearchResult(Generator.generateUri(), "test", vocabularyIri, null, + .singletonList(new FullTextSearchResult(Generator.generateUri(), "test", "Term definition", vocabularyIri, null, SKOS.CONCEPT, "test", "test", 
1.0)); when(searchServiceMock.fullTextSearchOfTerms(any(), any())).thenReturn(expected); final String searchString = "test"; diff --git a/src/test/java/cz/cvut/kbss/termit/service/business/SearchServiceTest.java b/src/test/java/cz/cvut/kbss/termit/service/business/SearchServiceTest.java index a4e7fcccb..9c8c8d6dc 100644 --- a/src/test/java/cz/cvut/kbss/termit/service/business/SearchServiceTest.java +++ b/src/test/java/cz/cvut/kbss/termit/service/business/SearchServiceTest.java @@ -68,6 +68,7 @@ void fullTextSearchFiltersResultsFromNonMatchingVocabularies() { final FullTextSearchResult ftsr = new FullTextSearchResult( Generator.generateUri(), "test", + "Term definition", vocabulary, null, SKOS.CONCEPT, @@ -88,6 +89,7 @@ void fullTextSearchReturnsResultsFromMatchingVocabularies() { final FullTextSearchResult ftsr = new FullTextSearchResult( Generator.generateUri(), "test", + "Term definition", vocabulary, Generator.generateUri(), SKOS.CONCEPT, diff --git a/src/test/java/cz/cvut/kbss/termit/service/document/html/HtmlTermOccurrenceResolverTest.java b/src/test/java/cz/cvut/kbss/termit/service/document/html/HtmlTermOccurrenceResolverTest.java index fdcfd3ed8..f17c72794 100644 --- a/src/test/java/cz/cvut/kbss/termit/service/document/html/HtmlTermOccurrenceResolverTest.java +++ b/src/test/java/cz/cvut/kbss/termit/service/document/html/HtmlTermOccurrenceResolverTest.java @@ -32,8 +32,6 @@ import org.jsoup.Jsoup; import org.jsoup.select.Elements; import org.junit.jupiter.api.Test; -import org.junit.jupiter.api.condition.DisabledOnOs; -import org.junit.jupiter.api.condition.OS; import org.junit.jupiter.api.extension.ExtendWith; import org.mockito.InjectMocks; import org.mockito.Mock; @@ -180,13 +178,12 @@ void findTermOccurrencesMarksOccurrencesAsSuggested() { } @Test - @DisabledOnOs(OS.WINDOWS) // TODO: https://github.com/kbss-cvut/termit/issues/275 void findTermOccurrencesSetsFoundOccurrencesAsApprovedWhenCorrespondingExistingOccurrenceWasApproved() throws Exception { when(termService.exists(TERM_URI)).thenReturn(true); final File file = initFile(); final TermOccurrence existing = Generator.generateTermOccurrence(new Term(TERM_URI), file, false); final Selector quoteSelector = new TextQuoteSelector("Územní plán", "RDFa simple", "hlavního města Prahy."); - final Selector posSelector = new TextPositionSelector(21, 32); + final Selector posSelector = new TextPositionSelector(29, 40); existing.getTarget().setSelectors(Set.of(quoteSelector, posSelector)); final InputStream is = cz.cvut.kbss.termit.environment.Environment.loadFile("data/rdfa-simple.html"); sut.parseContent(is, file); diff --git a/src/test/java/cz/cvut/kbss/termit/service/repository/TermRepositoryServiceTest.java b/src/test/java/cz/cvut/kbss/termit/service/repository/TermRepositoryServiceTest.java index 64c58367c..8b0da9488 100644 --- a/src/test/java/cz/cvut/kbss/termit/service/repository/TermRepositoryServiceTest.java +++ b/src/test/java/cz/cvut/kbss/termit/service/repository/TermRepositoryServiceTest.java @@ -501,7 +501,7 @@ private void generateRelatedInverse(Term term, Term related, String property) { try (final RepositoryConnection conn = repo.getConnection()) { final ValueFactory vf = conn.getValueFactory(); conn.add(vf.createIRI(related.getUri().toString()), vf.createIRI(property), vf - .createIRI(term.getUri().toString())); + .createIRI(term.getUri().toString()), vf.createIRI(related.getVocabulary().toString())); } } diff --git a/src/test/java/cz/cvut/kbss/termit/service/repository/UserRepositoryServiceTest.java 
b/src/test/java/cz/cvut/kbss/termit/service/repository/UserRepositoryServiceTest.java index 6b7126ad9..bcdbccaff 100644 --- a/src/test/java/cz/cvut/kbss/termit/service/repository/UserRepositoryServiceTest.java +++ b/src/test/java/cz/cvut/kbss/termit/service/repository/UserRepositoryServiceTest.java @@ -17,26 +17,6 @@ */ package cz.cvut.kbss.termit.service.repository; -import cz.cvut.kbss.termit.environment.Environment; -import cz.cvut.kbss.termit.environment.Generator; -import cz.cvut.kbss.termit.exception.ValidationException; -import cz.cvut.kbss.termit.model.UserAccount; -import cz.cvut.kbss.termit.persistence.dao.UserAccountDao; -import cz.cvut.kbss.termit.service.IdentifierResolver; -import cz.cvut.kbss.termit.util.Configuration; -import cz.cvut.kbss.termit.util.Vocabulary; -import jakarta.validation.Validation; -import jakarta.validation.Validator; -import org.junit.jupiter.api.Test; -import org.junit.jupiter.api.extension.ExtendWith; -import org.mockito.ArgumentCaptor; -import org.mockito.InjectMocks; -import org.mockito.Mock; -import org.mockito.Spy; -import org.mockito.junit.jupiter.MockitoExtension; -import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder; -import org.springframework.security.crypto.password.PasswordEncoder; - import java.net.URI; import java.util.Optional; @@ -47,10 +27,30 @@ import static org.junit.jupiter.api.Assertions.assertNull; import static org.junit.jupiter.api.Assertions.assertThrows; import static org.junit.jupiter.api.Assertions.assertTrue; -import static org.mockito.Mockito.any; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.ArgumentCaptor; +import static org.mockito.ArgumentMatchers.any; +import org.mockito.InjectMocks; +import org.mockito.Mock; import static org.mockito.Mockito.doAnswer; import static org.mockito.Mockito.verify; import static org.mockito.Mockito.when; +import org.mockito.Spy; +import org.mockito.junit.jupiter.MockitoExtension; +import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder; +import org.springframework.security.crypto.password.PasswordEncoder; + +import cz.cvut.kbss.termit.environment.Environment; +import cz.cvut.kbss.termit.environment.Generator; +import cz.cvut.kbss.termit.exception.ValidationException; +import cz.cvut.kbss.termit.model.UserAccount; +import cz.cvut.kbss.termit.persistence.dao.UserAccountDao; +import cz.cvut.kbss.termit.service.IdentifierResolver; +import cz.cvut.kbss.termit.util.Configuration; +import cz.cvut.kbss.termit.util.Vocabulary; +import jakarta.validation.Validation; +import jakarta.validation.Validator; @ExtendWith(MockitoExtension.class) class UserRepositoryServiceTest { @@ -168,7 +168,7 @@ void updateThrowsValidationExceptionWhenUpdatedInstanceIsMissingValues() { user.setUsername(null); user.setPassword(null); // Simulate instance being loaded from repo final ValidationException ex = assertThrows(ValidationException.class, () -> sut.update(user)); - assertThat(ex.getMessage(), containsString("username must not be blank")); + assertThat(ex.getMessage(), containsString("username")); } @Test diff --git a/src/test/java/cz/cvut/kbss/termit/service/security/authorization/SearchAuthorizationServiceTest.java b/src/test/java/cz/cvut/kbss/termit/service/security/authorization/SearchAuthorizationServiceTest.java index 9ffe2c3f3..641c64716 100644 --- a/src/test/java/cz/cvut/kbss/termit/service/security/authorization/SearchAuthorizationServiceTest.java +++ 
b/src/test/java/cz/cvut/kbss/termit/service/security/authorization/SearchAuthorizationServiceTest.java @@ -45,10 +45,11 @@ class SearchAuthorizationServiceTest { @Test void canReadChecksIfVocabularyIsReadableForTermResult() { - final FullTextSearchResult res = new FullTextSearchResult(Generator.generateUri(), "test string", - Generator.generateUri(), Generator.generateUri(), - SKOS.CONCEPT, "label", "test", - (double) Generator.randomInt()); + final FullTextSearchResult res = new FullTextSearchResult(Generator.generateUri(), "Term label", + "Term definition", + Generator.generateUri(), Generator.generateUri(), + SKOS.CONCEPT, "label", "test", + (double) Generator.randomInt()); when(vocabularyAuthorizationService.canRead(any(Vocabulary.class))).thenReturn(true); assertTrue(sut.canRead(res)); verify(vocabularyAuthorizationService).canRead(new Vocabulary(res.getVocabulary())); @@ -56,11 +57,12 @@ void canReadChecksIfVocabularyIsReadableForTermResult() { @Test void canReadChecksIfVocabularyIsReadableForVocabularyResult() { - final FullTextSearchResult res = new FullTextSearchResult(Generator.generateUri(), "test label", - null, null, - cz.cvut.kbss.termit.util.Vocabulary.s_c_slovnik, - "label", "test", - (double) Generator.randomInt()); + final FullTextSearchResult res = new FullTextSearchResult(Generator.generateUri(), "Vocabulary title", + "Vocabulary description", + null, null, + cz.cvut.kbss.termit.util.Vocabulary.s_c_slovnik, + "label", "test", + (double) Generator.randomInt()); assertFalse(sut.canRead(res)); verify(vocabularyAuthorizationService).canRead(new Vocabulary(res.getUri())); } diff --git a/src/test/resources/data/rdfa-simple.html b/src/test/resources/data/rdfa-simple.html index e57ed4f0a..2c0de15de 100644 --- a/src/test/resources/data/rdfa-simple.html +++ b/src/test/resources/data/rdfa-simple.html @@ -2,10 +2,10 @@ -RDFa simple + RDFa simple -Územní plán hlavního města Prahy.