Fixed some typos #2

Open · wants to merge 1 commit into base: master
README.md: 12 changes (6 additions & 6 deletions)
@@ -17,31 +17,31 @@ Peltas reads Alfresco nodes data via the Alfresco existing REST APIs and maps th

No additional amps/jars or Alfresco customization is needed to run Peltas since it is an independant Spring Boot application.

- Since Peltas was firstly built for Alfresco Audit logs data, the core engine is highly inspired by the data format of an audit log entry and therfore even the live workspace nodes are converted into a similar audit format while being processed.
+ Since Peltas was firstly built for Alfresco Audit logs data, the core engine is highly inspired by the data format of an audit log entry and therefor even the live workspace nodes are converted into a similar audit format while being processed.

Peltas also knows where to restart from, what was the last node processed is kept in the DB table named: peltas_timestamp.
- It is important to understand that such data processing cannot be "parallelized" and therfore clustering is not os any help in speeding up the data processing. Peltas has a scheduler and it will run in a fixed delay way that can be configured by setting the property to the value that fits your setup
+ It is important to understand that such data processing cannot be "parallelized" and therefor clustering is not of any help in speeding up the data processing. Peltas has a scheduler and it will run in a fixed delay way that can be configured by setting the property to the value that fits your setup
- peltas.scheduler.fixedDelay=5000
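As an illustration, the fixed delay can sit alongside the usual Spring Boot datasource settings in application.properties. The snippet below is a minimal sketch: the `spring.datasource.*` keys are standard Spring Boot properties, the connection values are placeholders rather than Peltas defaults, and it assumes the usual Spring convention that the delay is given in milliseconds.

```properties
# Poll for new Alfresco data every 5 seconds (assuming the usual Spring
# convention that fixed delays are expressed in milliseconds).
peltas.scheduler.fixedDelay=5000

# Standard Spring Boot datasource settings for the database that stores the
# processed data and the peltas_timestamp watermark (placeholder values).
spring.datasource.url=jdbc:postgresql://localhost:5432/peltas
spring.datasource.username=peltas
spring.datasource.password=changeme
```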

Workspace nodes data
--
- Nodes data are read from the Alfresco SOLR API. Just like the Alfresco search services do the indexing part, exactly the same services are used by Peltas and therfore no data is missed and everything is transactionally written the to DB storage.
+ Nodes data are read from the Alfresco SOLR API. Just like the Alfresco search services do the indexing part, exactly the same services are used by Peltas and therefor no data is missed and everything is transactionally written the to DB storage.

Alfresco Audit data
--
As of today this is not supported in the opensource community version of Peltas, but we are planning to add that to the community version too.

Cherry picking Alfresco nodes
--
- Peltas implements an evaluator engine, where each Alfresco node can be tested in order to be processed by Peltas or not taken into consideration, this is done by configuring an evaluator. An evaluator could be configured with node content type, action type or aspects/metadata and all of thoe could be combined
+ Peltas implements an evaluator engine, where each Alfresco node can be tested in order to be processed by Peltas or not taken into consideration, this is done by configuring an evaluator. An evaluator could be configured with node content type, action type or aspects/metadata and all of those could be combined

peltas.handler.documentupdated.evaluator=/alfresco-workspace/transaction/action=NODE-UPDATED|/alfresco-workspace/transaction/type=cm:content

The next step is to configure the node metadata mapping and do the data conversions if necessary

peltas.handler.documentupdated.mapper.property.action.data=/alfresco-workspace/transaction/action
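Putting the two together, a hedged sketch of a complete handler definition: the handler name, the evaluator line and the `action.data` mapping are taken from the examples above, while the second mapping (`type.data`) is only an assumed illustration of the same `mapper.property.<name>.data` key pattern, not a documented Peltas key.

```properties
# Evaluator: only NODE-UPDATED actions on cm:content nodes are processed
# (conditions are combined with the | separator).
peltas.handler.documentupdated.evaluator=/alfresco-workspace/transaction/action=NODE-UPDATED|/alfresco-workspace/transaction/type=cm:content

# Mapper: copy audit-format fields into the handler's output.
peltas.handler.documentupdated.mapper.property.action.data=/alfresco-workspace/transaction/action
# Hypothetical additional mapping, shown only to illustrate the key pattern.
peltas.handler.documentupdated.mapper.property.type.data=/alfresco-workspace/transaction/type
```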

- More information can be seen in the conifguration file at src/main/resources/io/peltas/peltas.properties and any of those could be overriden in Spring application.properties if required.
+ More information can be seen in the configuration file at src/main/resources/io/peltas/peltas.properties and any of those could be overriden in Spring application.properties if required.

# Run Locally
- git checkout
@@ -99,7 +99,7 @@ Any kind of BI tools with Database connectors can be used. For demos Power BI is
- you can change them and adapt to your specific schema requirements

# Custom Namespaces
- A custom Alfresco namespace is not automatically updated in Peltas Community and therfore you have to do an insert of your custom in the "peltas_model_dim" table:
+ A custom Alfresco namespace is not automatically updated in Peltas Community and therefor you have to do an insert of your custom in the "peltas_model_dim" table:

* INSERT INTO peltas_model_dim(shortname, longname, modified) VALUES ('your_shortname','{your_localname}',NOW());
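For example, registering a hypothetical custom model with the prefix `acme` could look like the line below; both the prefix and the URI are placeholders for your own namespace values.

* INSERT INTO peltas_model_dim(shortname, longname, modified) VALUES ('acme','{http://www.acme.com/model/content/1.0}',NOW());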
