This artifact provides a mechanism to invoke and query Hyperledger Fabric chaincode through a REST-based API. It can also invoke chaincode asynchronously and publish chaincode events to Kafka/Event-Hub topics.
- Invoke Chaincode with REST.
- Query Chaincode with REST.
- Invoke Chaincode with Kafka/Event-Hub.
- Publish chaincode events from multiple channels to Kafka/Event-Hub.
- Fabric 2.x network.
- Connection Profile YAML file.
- Wallet (.id) file.
- Java and Maven are installed.
- (Optional) Kafka/Event-Hub configuration for invoking chaincode asynchronously.
- (Optional) Kafka/Event-Hub configuration for publishing chaincode events.
- Download/clone the repository and build the project using `mvn clean install`.
- Create a `wallet` folder in the root directory of the project.
- If using the fabric-getting-started script, note the path to the CA PEM file for Org1, usually located in the `fabric-getting-started/test-network/organizations/peerOrganizations/org1.example.com/ca` folder.
- Open `EnrollAdmin.java`, set the `pemFilePath` variable to the path noted above, and run it. This will create `admin.id` in the wallet folder.
- Open `RegisterUser.java`, set the `pemFilePath` variable to the path noted above, and run it. This will create `clientUser.id` in the wallet folder.
- Add the `connection-org1.yaml` file, located at `fabric-getting-started/test-network/organizations/peerOrganizations/org1.example.com`, to the wallet folder.
- Make sure the Peer URL and CA URL in `connection-org1.yaml` are reachable. If using fabric-getting-started, change the peer URL in `connection-org1.yaml` to `peer0.org1.example.com:7051` and the CA URL to `ca-org1:7054`.
- Run the `hlf.java.rest.client.FabricClientBootstrap` Java file or the jar file.
- Alternatively, run as a container using `docker-compose up`.
If the Fabric network is running locally, make sure the `docker-compose.yml` file is configured to use the correct Docker network:

```yaml
networks:
  default:
    external:
      name: <fabric's network>
```
This component supports an event-based architecture by consuming transactions through Kafka and Azure EventHub. To configure it, add the following configuration to the `application.yml` file:
```yaml
kafka:
  integration-points:
    -
      brokerHost: <Hostname1 with Port>
      groupId: <Group ID>
      topic: <Topic Name>
    -
      brokerHost: <Hostname2 with Port>
      groupId: <Group ID>
      topic: <Topic Name>
      # For Azure EventHub
      jaasConfig: org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://<Hostname>/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key-value>";
      # For SOX compliant Kafka Clusters
      ssl-enabled: true
      security-protocol: SSL
      ssl-keystore-location: <YOUR_SSL_KEYSTORE_PATH>
      ssl-keystore-password: <YOUR_SSL_KEYSTORE_PASSWORD>
      ssl-truststore-location: <YOUR_SSL_TRUSTSTORE_PATH>
      ssl-truststore-password: <YOUR_SSL_TRUSTSTORE_PASSWORD>
      ssl-key-password: <YOUR_SSL_KEY_PASSWORD>
```
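For Azure EventHub, the `jaasConfig` value is the standard SASL PLAIN login-module string: the literal username `$ConnectionString` and the full Event Hub connection string as the password. A small helper can assemble it from a connection string (a sketch; the class and method names here are illustrative, not part of this project):

```java
public class JaasConfigBuilder {
    // Build the jaasConfig value for Azure Event Hub's Kafka-compatible endpoint.
    // Event Hub authenticates via SASL PLAIN with the literal username
    // "$ConnectionString" and the connection string itself as the password.
    static String eventHubJaas(String connectionString) {
        return "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"$ConnectionString\" "
                + "password=\"" + connectionString + "\";";
    }

    public static void main(String[] args) {
        System.out.println(eventHubJaas(
                "Endpoint=sb://myns.servicebus.windows.net/;"
                        + "SharedAccessKeyName=send;SharedAccessKey=abc123"));
    }
}
```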
The component accepts a JSON payload and three headers to invoke the chaincode. The header keys are:
1. channel_name
2. function_name
3. chaincode_name
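An invocation can be sketched with Java's built-in HTTP client. The host, port, endpoint path (`/invoke_transaction`), and the payload shape are assumptions for illustration; check the controller class for the actual route:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class InvokeExample {
    // Build a chaincode-invoke request carrying the three required headers.
    // NOTE: the URL and header values below are illustrative assumptions.
    static HttpRequest buildInvokeRequest(String jsonPayload) {
        return HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/invoke_transaction"))
                .header("channel_name", "mychannel")
                .header("chaincode_name", "mychaincode")
                .header("function_name", "createAsset")
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(jsonPayload))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildInvokeRequest("{\"id\":\"asset1\"}");
        System.out.println(req.method() + " " + req.uri());
    }
}
```

Sending it is then a matter of `HttpClient.newHttpClient().send(req, ...)` once the service is running.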
This component supports capturing chaincode events and publishing them to Kafka or Azure EventHub. This can be useful for integrating with an off-chain database. To configure it, add the following configuration to the `application.yml` file:
```yaml
fabric:
  events:
    enabled: true
    chaincode: mychannel1,mychannel2 # Comma-separated list for listening to events from multiple channels
kafka:
  event-listener:
    brokerHost: <Hostname with Port>
    topic: <Topic Name>
    # For Azure EventHub
    jaasConfig: org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://<Hostname>/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key-value>";
    # For SOX compliant Kafka Clusters
    ssl-enabled: true
    security-protocol: SSL
    ssl-keystore-location: <YOUR_SSL_KEYSTORE_PATH>
    ssl-keystore-password: <YOUR_SSL_KEYSTORE_PASSWORD>
    ssl-truststore-location: <YOUR_SSL_TRUSTSTORE_PATH>
    ssl-truststore-password: <YOUR_SSL_TRUSTSTORE_PASSWORD>
    ssl-key-password: <YOUR_SSL_KEY_PASSWORD>
```
The component forwards the same JSON payload emitted by the chaincode and adds the following headers:
1. fabric_tx_id
2. event_name
3. channel_name
4. event_type (value: chaincode_event)
This component supports capturing block events and publishing them to Kafka or Azure EventHub. This can be useful for integrating with an off-chain database where adding events to chaincode is not possible (for example, the Food-Trust anchor channel). To configure it, add the following configuration to the `application.yml` file:
```yaml
fabric:
  events:
    enabled: true
    block: mychannel1,mychannel2 # Comma-separated list for listening to events from multiple channels
kafka:
  event-listener:
    brokerHost: <Hostname with Port>
    topic: <Topic Name>
    # For Azure EventHub
    jaasConfig: org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://<Hostname>/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key-value>";
    # For SOX compliant Kafka Clusters
    ssl-enabled: true
    security-protocol: SSL
    ssl-keystore-location: <YOUR_SSL_KEYSTORE_PATH>
    ssl-keystore-password: <YOUR_SSL_KEYSTORE_PASSWORD>
    ssl-truststore-location: <YOUR_SSL_TRUSTSTORE_PATH>
    ssl-truststore-password: <YOUR_SSL_TRUSTSTORE_PASSWORD>
    ssl-key-password: <YOUR_SSL_KEY_PASSWORD>
```
The component forwards the same JSON payload sent by the chaincode and adds the following headers:
1. fabric_tx_id
2. channel_name
3. chaincode_name
4. function_name
5. event_type (value: block_event)
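Since both event kinds can land on the same topic, a downstream consumer can branch on the `event_type` header. A minimal routing sketch, assuming header values have already been decoded to strings (a real Kafka consumer would read them as bytes from `record.headers()`):

```java
import java.util.Map;

public class EventRouter {
    // Dispatch a consumed message based on the event_type header
    // added by this component (chaincode_event or block_event).
    static String route(Map<String, String> headers) {
        switch (headers.getOrDefault("event_type", "")) {
            case "chaincode_event":
                return "chaincode event " + headers.get("event_name")
                        + " on channel " + headers.get("channel_name");
            case "block_event":
                return "block event for tx " + headers.get("fabric_tx_id");
            default:
                return "unknown event type";
        }
    }

    public static void main(String[] args) {
        System.out.println(route(Map.of(
                "event_type", "chaincode_event",
                "event_name", "AssetCreated",
                "channel_name", "mychannel")));
    }
}
```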