What's New:
- Add new interface for blockchain plugins to stream receipt notifications in transactional batches
  - For blockchain connectors that have an ack-based reliable receipt stream (or other checkpoint system)
  - Allows strictly ordered delivery of receipts from blockchain plugins that support it
  - Allows resilience on receipt delivery to core, against a checkpoint maintained in the connector
- Changes in metrics:
  - Added new metrics for Data Exchange, for monitoring by a timeseries and alerting system:
    - ff_multiparty_node_identity_dx_mismatch indicates that the certificate in FireFly Core is different to the one stored in Data Exchange
    - ff_multiparty_node_identity_dx_expiry_epoch emits the expiry timestamp of the Data Exchange certificate, useful for SREs to monitor before it expires
  - Added a namespace label to existing metrics, to separate metrics more easily
  - Added HTTP response time and complete gauge support to firefly-common
  - Allow the metrics server to host additional routes, such as status endpoints
  - This resulted in a new monitoring configuration section, which is more appropriate than metrics, which has now been deprecated
- Fix to issue that resulted in retried private messages using the local namespace rather than the network namespace
- Fix to issue that could result in messages being marked Pending on re-delivery of a batch over the network
- Miscellaneous bug fixes and minor improvements
- Documentation updates, including a new troubleshooting section for multiparty messages
- CVE fixes and adoption of the OpenSSF scorecard on key repositories

Note: As part of the changes to add the new namespace label to metrics, the implementation changed from using a Prometheus Counter to a CounterVec. As a result there is no default value of 0 on the counter, which means that when users query for a specific metric, such as ff_message_rejected_total, it will not be available until the CounterVec associated with that metric is incremented. This has been determined to be an easy upgrade for SREs monitoring these metrics, hence inclusion in a patch release.
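The behaviour described for the Counter-to-CounterVec change can be illustrated with a minimal sketch. This is our own map-based model of a labelled counter, not the Prometheus client library itself: the point is that a labelled time series only comes into existence on its first increment, so a query before that finds no series at all.

```go
package main

import "fmt"

// counterVec is an illustrative stand-in for a labelled counter: one time
// series per label value (here, the namespace). A series does not exist,
// and exports no 0 value, until it is first incremented.
type counterVec struct {
	series map[string]float64
}

func newCounterVec() *counterVec {
	return &counterVec{series: map[string]float64{}}
}

// inc creates the series for this namespace on first use, then increments it.
func (c *counterVec) inc(namespace string) { c.series[namespace]++ }

// collect mirrors what a scrape would see: only series that exist.
func (c *counterVec) collect() map[string]float64 { return c.series }

func main() {
	rejected := newCounterVec() // e.g. ff_message_rejected_total
	fmt.Println(len(rejected.collect())) // no series yet, so queries find nothing
	rejected.inc("ns1")
	fmt.Println(rejected.collect()["ns1"]) // series exists after first increment
}
```

This is why dashboards or alerts that assume a 0 default for these metrics need a small adjustment (for example, treating an absent series as 0) after upgrading.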
Hyperledger FireFly is an open source Supernode, a complete stack for enterprises to build and scale secure Web3 applications.
The easiest way to understand a FireFly Supernode is to think of it like a toolbox. Connect your existing apps and/or back office systems to the toolbox and within it there are two different sets of tools. One set of tools helps you connect to the Web3 world that already exists, and the other set allows you to build new decentralized applications quickly with security and scalability.
Head to the Understanding FireFly section for more details.
"},{"location":"SUMMARY/","title":"SUMMARY","text":"Hyperledger FireFly has a multi-tier pluggable architecture for supporting blockchains of all shapes and sizes. This includes a remote API that allows a microservice connector to be built from scratch in any programming language.
It also includes the Connector Toolkit, which is a pluggable SDK in Golang that provides a set of re-usable modules that can be used across blockchain implementations.
This is the preferred way to build a new blockchain connector, if you are comfortable with coding in Golang and there are language bindings available for the raw RPC interface of your blockchain.
"},{"location":"architecture/blockchain_connector_framework/#connector-toolkit-architecture","title":"Connector Toolkit Architecture","text":"The core component of the FireFly Connector Framework for Blockchains is a Go module called FireFly Transaction Manager (FFTM).
FFTM is responsible for:
Submission of transactions to blockchains of all types
Protocol connectivity decoupled with additional lightweight API connector
Easy to add additional protocols that conform to normal patterns of TX submission / events
Monitoring and updating blockchain operations
Receipts
Confirmations
Extensible transaction handler with capabilities such as:
Nonce management: idempotent submission of transactions, and assignment of nonces
Transaction process history
Event streaming
The framework is currently constrained to blockchains that adhere to certain basic principles:
Has transactions
That are signed
That can optionally have gas semantics (limits and prices, expressed in a blockchain specific way)
Has events (or \"logs\")
That are emitted as a deterministic outcome of transactions
Has blocks
Containing zero or more transactions, with their associated events
With a parent hash
Has finality for transactions & events that can be expressed as a level of confidence over time
Confirmations: A number of sequential blocks in the canonical chain that contain the transaction
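The confirmation rule above can be sketched as simple arithmetic. This is an illustrative function of our own (not the FFTM API), using the common convention that the block containing the transaction counts as the first confirmation:

```go
package main

import "fmt"

// confirmations returns how many sequential canonical-chain blocks confirm a
// transaction mined in block txBlock, given the current chain head. The
// containing block itself counts as the first confirmation (conventions vary
// between blockchains and tools; this is one common choice).
func confirmations(txBlock, chainHead uint64) uint64 {
	if chainHead < txBlock {
		return 0 // not yet in the canonical chain at this head
	}
	return chainHead - txBlock + 1
}

func main() {
	// Mined in block 100, head at block 104: 5 confirmations.
	fmt.Println(confirmations(100, 104))
}
```

A connector typically treats the transaction as final once this count reaches a configured threshold, and must handle the count resetting if a re-org removes the containing block from the canonical chain.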
The nonce for each transaction is assigned as early as possible in the flow:
This \"at source\" allocation of nonces provides the strictest assurance of order of transactions possible, because the order is locked in with the coordination of the business logic of the application submitting the transaction.
As well as protecting against loss of transactions, this protects against duplication of transactions - even in crash recovery scenarios with a sufficiently reliable persistence layer.
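The combination described above, of at-source ordering plus crash-safe idempotency, can be sketched as follows. This is a simplified model of our own, not the actual FFTM code: the nonce is allocated under a per-signer lock at submission time and recorded against the operation ID, so a retried or crash-recovered submission gets back the same nonce instead of a duplicate.

```go
package main

import (
	"fmt"
	"sync"
)

// nonceAllocator sketches "at source" nonce allocation. Names and structure
// are illustrative only.
type nonceAllocator struct {
	mu   sync.Mutex
	next map[string]uint64 // next nonce per signing address
	byID map[string]uint64 // idempotency record: operation ID -> assigned nonce
}

func newNonceAllocator() *nonceAllocator {
	return &nonceAllocator{next: map[string]uint64{}, byID: map[string]uint64{}}
}

// allocate is idempotent on opID: a retry of the same operation receives the
// nonce that was already locked in, protecting against duplication.
func (a *nonceAllocator) allocate(signer, opID string) uint64 {
	a.mu.Lock()
	defer a.mu.Unlock()
	if n, ok := a.byID[opID]; ok {
		return n // crash-recovery / retry path
	}
	n := a.next[signer]
	a.next[signer]++
	a.byID[opID] = n // in a real system this is persisted before submission
	return n
}

func main() {
	a := newNonceAllocator()
	fmt.Println(a.allocate("0xabc", "op-1")) // first operation for this signer
	fmt.Println(a.allocate("0xabc", "op-2")) // next nonce in strict order
	fmt.Println(a.allocate("0xabc", "op-1")) // retry: same nonce as before
}
```

The lock scope is what "locks in" ordering with the submitting business logic, and the persisted operation-to-nonce record is what makes crash recovery safe given a sufficiently reliable persistence layer.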
"},{"location":"architecture/blockchain_connector_framework/#avoid-multiple-nonce-management-systems-against-the-same-signing-key","title":"Avoid multiple nonce management systems against the same signing key","text":"FFTM is optimized for cases where all transactions for a given signing address flow through the same FireFly connector. If you have signing and nonce allocation happening elsewhere, not going through the FireFly blockchain connector, then it is possible that the same nonce will be allocated in two places.
Be careful that the signing keys for transactions you stream through the Nonce Management of the FireFly blockchain connector are not used elsewhere.
If you must have multiple systems performing nonce management against the same keys you use with FireFly nonce management, you can set the transactions.nonceStateTimeout to 0 (or a low threshold like 100ms) to cause the nonce management to query the pending transaction pool of the node every time a nonce is allocated.
This reduces the window for concurrent nonce allocation to be small (basically the same as if you had multiple simple web/mobile wallets used against the same key), but it does not eliminate it completely.
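For reference, the setting described above might look like this in the connector configuration. This is a minimal sketch assuming a YAML config layout with the `transactions` section named in the text; check your connector's documentation for the exact file structure:

```yaml
transactions:
  # 0 (or a low threshold like 100ms) forces the nonce management to query
  # the node's pending transaction pool on every allocation, shrinking (but
  # not eliminating) the window for concurrent allocation when another
  # system signs with the same key.
  nonceStateTimeout: 0
```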
"},{"location":"architecture/blockchain_connector_framework/#why-at-source-nonce-management-was-chosen-vs-at-target","title":"Why \"at source\" nonce management was chosen vs. \"at target\"","text":"The \"at source\" approach to ordering used in FFTM could be compared with the \"at target\" allocation of nonces used in EthConnect.
The \"at target\" approach optimizes for throughput and ability to send new transactions to the chain, with an at-least-once delivery assurance to the applications.
An \"at target\" algorithm as used in EthConnect could resume transaction delivery automatically without operator intervention from almost all scenarios, including where nonces have been double allocated.
However, \"at target\" comes with two compromises that led to the \"at source\" approach being chosen for FFTM:
Individual transactions might fail in certain scenarios, and subsequent transactions will still be streamed to the chain. While desirable for automation and throughput, this reduces the ordering guarantee for high value transactions.
In crash recovery scenarios the assurance is at-least-once delivery for \"at target\" ordering (rather than \"exactly once\"), although the window can be made very small through various optimizations included in the EthConnect codebase.
The transaction handler is a pluggable component that allows customized logic to be applied to the gas pricing, signing, submission and re-submission of transactions to the blockchain.
The transaction handler can store custom state in the state store of the FFTM code, which is also reported in the operation status within the FireFly API/Explorer.
A reference implementation is provided that:
Includes gas price estimation (such as eth_gasPrice for EVM JSON/RPC). The reference implementation is available here
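To make the eth_gasPrice mention concrete, here is a sketch of the response handling such a gas-price lookup involves. The request/response shapes follow the standard Ethereum JSON/RPC convention (a hex-encoded quantity in wei); the function names are ours, and a canned response is used rather than a live node call:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
	"strings"
)

// rpcResponse models the relevant part of a JSON/RPC reply to eth_gasPrice.
type rpcResponse struct {
	Result string `json:"result"` // hex quantity, e.g. "0x3b9aca00"
}

// parseGasPrice extracts the gas price in wei from an eth_gasPrice response body.
func parseGasPrice(body []byte) (uint64, error) {
	var resp rpcResponse
	if err := json.Unmarshal(body, &resp); err != nil {
		return 0, err
	}
	return strconv.ParseUint(strings.TrimPrefix(resp.Result, "0x"), 16, 64)
}

func main() {
	// Example response for a 1 gwei (1,000,000,000 wei) gas price.
	body := []byte(`{"jsonrpc":"2.0","id":1,"result":"0x3b9aca00"}`)
	price, err := parseGasPrice(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(price) // 1000000000
}
```

A pluggable transaction handler would apply logic like this (fixed price, oracle lookup, or escalation on re-submission) each time it prices a transaction.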
"},{"location":"architecture/blockchain_connector_framework/#event-streams","title":"Event Streams","text":"One of the largest pieces of heavy-lifting code in the FFTM codebase is the event stream support. This provides a WebSocket (and Webhook) interface that FireFly Core and the Tokens Connectors connect to in order to receive ordered streams of events from the blockchain.
The interface down to the blockchain layer is via go channels, and there are lifecycle interactions over the FFCAPI to the blockchain specific code to add and remove listeners for different types of blockchain events.
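The channel-based contract between the blockchain-specific code and the event stream layer can be sketched as follows. The types and function names here are our own illustration, not the FFCAPI interfaces: the connector pushes events for a listener into a channel, and the stream layer drains it in order for delivery to consumers.

```go
package main

import "fmt"

// event is an illustrative blockchain event, ordered by block and log index.
type event struct {
	BlockNumber uint64
	LogIndex    uint64
	Name        string
}

// runListener drains a listener's channel in order, handing each event to the
// delivery callback. A Go channel preserves submission order, which is the
// basis of the ordered-stream guarantee down to the blockchain layer.
func runListener(events <-chan event, deliver func(event)) {
	for ev := range events {
		deliver(ev)
	}
}

func main() {
	events := make(chan event, 3)
	events <- event{100, 0, "Transfer"}
	events <- event{100, 1, "Approval"}
	events <- event{101, 0, "Transfer"}
	close(events) // lifecycle: listener removed, channel closed

	runListener(events, func(ev event) {
		fmt.Printf("block %d log %d %s\n", ev.BlockNumber, ev.LogIndex, ev.Name)
	})
}
```

In the real framework, adding or removing a listener is a lifecycle interaction over the FFCAPI, and delivery is checkpointed so a reconnecting WebSocket consumer resumes from the right position in the ordered stream.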
Some high-level architectural principles that informed the code:
One of the most important roles FireFly has is to take actions being performed by the local apps, process them, get them confirmed, and then deliver them back as a \"stream of consciousness\" to the application, alongside all the other events that are coming into the application from other FireFly Nodes in the network.
You might observe the problems solved in this architecture are similar to those in a message queuing system (like Apache Kafka, or a JMS/AMQP provider like ActiveMQ etc.).
However, we cannot directly replace the internal logic with such a runtime - because FireFly's job is to aggregate data from multiple runtimes that behave similarly to these:
So FireFly provides the convenient REST based management interface to simplify the world for application developers, by aggregating the data from multiple locations, and delivering it to apps in a deterministic sequence.
The sequence is made deterministic:
The core architecture of a FireFly node can be broken down into the following three areas:
What fundamentally is a node - left side of the above diagram.
What are the core runtime responsibilities, and pluggable elements - right side of the above diagram.
Connectors and Infrastructure Runtimes. Connectors are the bridging runtimes that know how to talk to a particular runtime. Infrastructure Runtimes are the core runtimes for multi-party system activities. What is the code structure inside the core.
This demonstrates the problem that, at its core, FireFly is there to solve: the internal plumbing complexity of even a very simple set of enterprise blockchain / multi-party system interactions.
This is the kind of thing that enterprise projects have been solving ground-up since the dawn of enterprise blockchain, and the level of engineering required, which is completely detached from business value, is very high.
The \"tramlines\" view shows how FireFly's pluggable model makes the job of the developer really simple:
This is deliberately a simple flow, and all kinds of additional layers might well layer on (and fit within the FireFly model):
This diagram shows the various plugins that are currently in the codebase and the layers in each plugin
This diagram shows the details of what goes into each layer of a FireFly plugin
"},{"location":"architecture/plugin_architecture/#overview","title":"Overview","text":"The FireFly node is built for extensibility, with separate pluggable runtimes orchestrated into a common API for developers. The mechanics of that pluggability for developers of new connectors are explained below:
This architecture is designed to provide separation of concerns to account for:
We welcome anyone to contribute to the FireFly project! If you're interested, this is a guide on how to get started. You don't have to be a blockchain expert to make valuable contributions! There are lots of places for developers of all experience levels to get involved.
\ud83e\uddd1\ud83c\udffd\u200d\ud83d\udcbb \ud83d\udc69\ud83c\udffb\u200d\ud83d\udcbb \ud83d\udc69\ud83c\udffe\u200d\ud83d\udcbb \ud83e\uddd1\ud83c\udffb\u200d\ud83d\udcbb \ud83e\uddd1\ud83c\udfff\u200d\ud83d\udcbb \ud83d\udc68\ud83c\udffd\u200d\ud83d\udcbb \ud83d\udc69\ud83c\udffd\u200d\ud83d\udcbb \ud83e\uddd1\ud83c\udffe\u200d\ud83d\udcbb \ud83d\udc68\ud83c\udfff\u200d\ud83d\udcbb \ud83d\udc68\ud83c\udffe\u200d\ud83d\udcbb \ud83d\udc69\ud83c\udfff\u200d\ud83d\udcbb \ud83d\udc68\ud83c\udffb\u200d\ud83d\udcbb
"},{"location":"contributors/#connect-with-us-on-discord","title":"\ud83d\ude80 Connect with us on Discord","text":"You can chat with maintainers and other contributors on Discord in the firefly channel: https://discord.gg/hyperledger
Join Discord Server
"},{"location":"contributors/#join-our-community-calls","title":"\ud83d\udcc5 Join our Community Calls","text":"Community calls are a place to talk to other contributors, maintainers, and other people interested in FireFly. Maintainers often discuss upcoming changes and proposed new features on these calls. These calls are a great way for the community to give feedback on new ideas, ask questions about FireFly, and hear how others are using FireFly to solve real world problems.
Please see the FireFly Calendar for the current meeting schedule, and the link to join. Everyone is welcome to join, regardless of background or experience level.
"},{"location":"contributors/#find-your-first-issue","title":"\ud83d\udd0d Find your first issue","text":"If you're looking for somewhere to get started in the FireFly project and want something small and relatively easy, take a look at issues tagged with \"Good first issue\". You can definitely work on other things if you want to. These are only suggestions for easy places to get started.
See \"Good First Issues\"
NOTE Hyperledger FireFly has a microservice architecture so it has many different GitHub repos. Use the link or the button above to look for \"Good First Issues\" across all the repos at once.
Here are some other suggestions of places to get started, based on experience you may already have:
"},{"location":"contributors/#any-level-of-experience","title":"Any level of experience","text":"If you're looking to make your first open source contribution, the FireFly documentation is a great place to make small, easy improvements. These improvements are also very valuable, because they help the next person that may want to know the same thing.
Here are some detailed instructions on Contributing to Documentation
"},{"location":"contributors/#go-experience","title":"Go experience","text":"If you have some experience in Go and really want to jump into FireFly, the FireFly Core is the heart of the project.
Here are some detailed instructions on Setting up a FireFly Core Development Environment.
"},{"location":"contributors/#little-or-no-go-experience-but-want-to-learn","title":"Little or no Go experience, but want to learn","text":"If you don't have a lot of experience with Go, but are interested in learning, the FireFly CLI might be a good place to start. The FireFly CLI is a tool to set up local instances of FireFly for building apps that use FireFly, and for doing development on FireFly itself.
"},{"location":"contributors/#typescript-experience","title":"TypeScript experience","text":"If you have some experience in TypeScript, there are several FireFly microservices that are written in TypeScript. The Data Exchange is used for private messaging between FireFly nodes. The ERC-20/ERC-721 Tokens Connector and ERC-1155 Tokens Connector are used to abstract token contract specifics from the FireFly Core.
"},{"location":"contributors/#reacttypescript-experience","title":"React/TypeScript experience","text":"If you want to do some frontend development, the FireFly UI is written in TypeScript and React.
"},{"location":"contributors/#go-and-blockchain-experience","title":"Go and blockchain experience","text":"If you already have some experience with blockchain and want to work on some backend components, the blockchain connectors, firefly-ethconnect (for Ethereum) and firefly-fabconnect (for Fabric) are great places to get involved.
"},{"location":"contributors/#make-changes","title":"\ud83d\udcdd Make changes","text":"To contribute to the repository, please fork the repository that you want to change. Then clone your fork locally on your machine and make your changes. As you commit your changes, push them to your fork. More information on making commits below.
"},{"location":"contributors/#commit-with-developer-certificate-of-origin","title":"\ud83d\udcd1 Commit with Developer Certificate of Origin","text":"As with all Hyperledger repositories, FireFly requires proper sign-off on every commit that is merged into the main branch. The sign-off indicates that you certify the changes you are submitting are in accordance with the Developer Certificate of Origin. To sign-off on your commit, you can use the -s flag when you commit changes.
git commit -s -m \"Your commit message\"\n This will add a string like this to the end of your commit message:
\"Signed-off-by: Your Name <your-email@address>\"\n NOTE: Sign-off is not the same thing as signing your commits with a private key. Both operations use a similar flag, which can be confusing. The one you want is the lowercase -s \ud83d\ude42
When you're ready to submit your changes for review, open a Pull Request back to the upstream repository. When you open your pull request, the maintainers will automatically be notified. Additionally, a series of automated checks will be performed on your code to make sure it passes certain repository specific requirements.
Maintainers may have suggestions on things to improve in your pull request. It is our goal to get code that is beneficial to the project merged as quickly as possible, so we don't like to leave pull requests hanging around for a long time. If the project maintainers are satisfied with the changes, they will approve and merge the pull request.
Thanks for your interest in collaborating on this project!
"},{"location":"contributors/#inclusivity","title":"Inclusivity","text":"The Hyperledger Foundation and the FireFly project are committed to fostering a community that is welcoming to all people. When participating in community discussions, contributing code or documentation, please abide by the following guidelines:
This page details some of the more advanced options of the FireFly CLI
"},{"location":"contributors/advanced_cli_usage/#understanding-how-the-cli-uses-firefly-releases","title":"Understanding how the CLI uses FireFly releases","text":""},{"location":"contributors/advanced_cli_usage/#the-manifestjson-file","title":"The manifest.json file","text":"FireFly has a manifest.json file in the root of the repo. This file contains a list of versions (both tag and sha) for each of the microservices that should be used with this specific commit.
Here is an example of what the manifest.json looks like:
{\n \"ethconnect\": {\n \"image\": \"ghcr.io/hyperledger/firefly-ethconnect\",\n \"tag\": \"v3.0.4\",\n \"sha\": \"0b7ce0fb175b5910f401ff576ced809fe6f0b83894277c1cc86a73a2d61c6f41\"\n },\n \"fabconnect\": {\n \"image\": \"ghcr.io/hyperledger/firefly-fabconnect\",\n \"tag\": \"v0.9.0\",\n \"sha\": \"a79a4c66b0a2551d5122d019c15c6426e8cdadd6566ce3cbcb36e008fb7861ca\"\n },\n \"dataexchange-https\": {\n \"image\": \"ghcr.io/hyperledger/firefly-dataexchange-https\",\n \"tag\": \"v0.9.0\",\n \"sha\": \"0de5b1db891a02871505ba5e0507821416d9fa93c96ccb4b1ba2fac45eb37214\"\n },\n \"tokens-erc1155\": {\n \"image\": \"ghcr.io/hyperledger/firefly-tokens-erc1155\",\n \"tag\": \"v0.9.0-20211019-01\",\n \"sha\": \"aabc6c483db408896838329dab5f4b9e3c16d1e9fa9fffdb7e1ff05b7b2bbdd4\"\n }\n}\n"},{"location":"contributors/advanced_cli_usage/#default-cli-behavior-for-releases","title":"Default CLI behavior for releases","text":"When creating a new stack, the CLI will by default, check the latest non-pre-release version of FireFly and look at its manifest.json file that was part of that commit. It will then use the Docker images referenced in that file to determine which images it should pull for the new stack. The specific image tag and sha is written to the docker-compose.yml file for that stack, so restarting or resetting a stack will never pull a newer image.
If you need to run some other version that is not the latest release of FireFly, you can tell the FireFly CLI which release to use by using the --release or -r flag. For example, to explicitly use v0.9.0 run this command to initialize the stack:
ff init -r v0.9.0\n"},{"location":"contributors/advanced_cli_usage/#running-an-unreleased-version-of-one-or-more-services","title":"Running an unreleased version of one or more services","text":"If you need to run an unreleased version of FireFly or one of its microservices, you can point the CLI to a specific manifest.json on your local disk. To do this, use the --manifest or -m flag. For example, if you have a file at ~/firefly/manifest.json:
ff init -m ~/firefly/manifest.json\n If you need to test a locally built docker image of a specific service, you'll want to edit the manifest.json before running ff init. Let's look at an example where we want to run a locally built version of fabconnect. The same steps apply to any of FireFly's microservices.
From the fabconnect project directory, build and tag a new Docker image:
docker build -t ghcr.io/hyperledger/firefly-fabconnect .\n"},{"location":"contributors/advanced_cli_usage/#edit-your-manifestjson-file","title":"Edit your manifest.json file","text":"Next, edit the fabconnect section of the manifest.json file. You'll want to remove the tag and sha and add a \"local\": true field, so it looks like this:
...\n \"fabconnect\": {\n \"image\": \"ghcr.io/hyperledger/firefly-fabconnect\",\n \"local\": true\n },\n...\n"},{"location":"contributors/advanced_cli_usage/#initialize-the-stack-with-the-custom-manifestjson-file","title":"Initialize the stack with the custom manifest.json file","text":" ff init local-test -b fabric -m ~/Code/hyperledger/firefly/manifest.json\n ff start local-test\n If you are iterating on changes locally, you can get the CLI to use an updated image by doing the following:
Run ff reset <stack_name> and then ff start <stack_name> to reset the data and use the newer image. You may have noticed that FireFly Core is actually not listed in the manifest.json file. If you want to run a locally built image of FireFly Core, you can follow the same steps above, but instead of editing an existing section in the file, we'll add a new one for FireFly.
From the firefly project directory, build and tag a new Docker image:
make docker\n"},{"location":"contributors/advanced_cli_usage/#initialize-the-stack-with-the-custom-manifestjson-file_1","title":"Initialize the stack with the custom manifest.json file","text":" ff init local-test -m ~/Code/hyperledger/firefly/manifest.json\n ff start local-test\n"},{"location":"contributors/code_hierarchy/","title":"FireFly Code Hierarchy","text":"Use the following diagram to better understand the hierarchy amongst the core FireFly components, plugins and utility frameworks:
\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 cmd \u251c\u2500\u2500\u2524 firefly [Ff]\u2502 - CLI entry point\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502 \u2502 - Creates parent context\n \u2502 \u2502 - Signal handling\n \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - HTTP listener (Gorilla mux)\n\u2502 internal \u251c\u2500\u2500\u2524 api [As]\u2502 * TLS (SSL), CORS configuration etc.\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502 server \u2502 * WS upgrade on same port\n \u2502 \u2502 - REST route definitions\n \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Simple routing logic only, all processing deferred to orchestrator\n \u2502\n \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - REST route definition framework\n \u2502 openapi [Oa]\u2502 * Standardizes Body, Path, Query, Filter semantics\n \u2502 spec | - OpenAPI 3.0 (Swagger) generation\n \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Including Swagger. 
UI\n \u2502\n \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - WebSocket server\n \u2502 [Ws]\u2502 * Developer friendly JSON based protocol business app development\n \u2502 websockets \u2502 * Reliable sequenced delivery\n \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * _Event interface [Ei] supports lower level integration with other compute frameworks/transports_\n \u2502\n \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Extension point interface to listen for database change events\n \u2502 admin [Ae]\u2502 * For building microservice extensions to the core that run externally\n \u2502 events | * Used by the Transaction Manager component\n \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Filtering to specific object types\n \u2502\n \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Core data types\n \u2502 fftypes [Ft]\u2502 * Used for API and Serialization\n \u2502 \u2502 * APIs can mask fields on input via router definition\n \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502\n \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Core runtime server. 
Initializes and owns instances of:\n \u2502 [Or]\u2502 * Components: Implement features\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2524 orchestrator \u2502 * Plugins: Pluggable infrastructure services\n \u2502 \u2502 \u2502 \u2502 - Exposes actions to router\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Processing starts here for all API calls\n \u2502 \u2502\n \u2502 Components: Components do the heavy lifting within the engine\n \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Integrates with Blockchain Smart Contract logic across blockchain technologies\n \u2502 \u251c\u2500\u2500\u2500\u2524 contract [Cm]\u2502 * Generates OpenAPI 3 / Swagger definitions for smart contracts, and propagates to network\n \u2502 \u2502 \u2502 manager \u2502 * Manages listeners for native Blockchain events, and routes those to application events\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Convert to/from native Blockchain interfaces (ABI etc.) 
and FireFly Interface [FFI] format\n \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Maintains a view of the entire network\n \u2502 \u251c\u2500\u2500\u2500\u2524 network [Nm]\u2502 * Integrates with network permissioning [NP] plugin\n \u2502 \u2502 \u2502 map \u2502 * Integrates with broadcast plugin\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Handles hierarchy of member identity, node identity and signing identity\n \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Broadcast of data to all parties in the network\n \u2502 \u251c\u2500\u2500\u2500\u2524 broadcast [Bm]\u2502 * Implements dispatcher for batch component\n \u2502 \u2502 \u2502 manager | * Integrates with shared storage interface [Ss] plugin\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Integrates with blockchain interface [Bi] plugin\n \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Send private data to individual parties in the network\n \u2502 \u251c\u2500\u2500\u2500\u2524 private [Pm]\u2502 * Implements dispatcher for batch component\n \u2502 \u2502 \u2502 messaging | * Integrates with the data exchange [Dx] plugin\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Messages can be pinned and sequenced via the blockchain, or just sent\n \u2502 \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Groups of parties, with isolated data and/or blockchains\n \u2502 \u2502 \u2502 group [Gm]\u2502 * Integrates with data exchange [Dx] plugin\n \u2502 
\u2502 \u2502 manager \u2502 * Integrates with blockchain interface [Bi] plugin\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Private data management and validation\n \u2502 \u251c\u2500\u2500\u2500\u2524 data [Dm]\u2502 * Implements dispatcher for batch component\n \u2502 \u2502 \u2502 manager \u2502 * Integrates with data exchange [Dx] plugin\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Integrates with blockchain interface [Bi] plugin\n \u2502 \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - JSON data schema management and validation (architecture extensible to XML and more)\n \u2502 \u2502 \u2502 json [Jv]\u2502 * JSON Schema validation logic for outbound and inbound messages\n \u2502 \u2502 \u2502 validator \u2502 * Schema propagation\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Integrates with broadcast plugin\n \u2502 \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Binary data addressable via ID or Hash\n \u2502 \u2502 \u2502 blobstore [Bs]\u2502 * Integrates with data exchange [Dx] plugin\n \u2502 \u2502 \u2502 \u2502 * Hashes data, and maintains mapping to payload references in blob storage\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Integrates with blockchain interface [Bi] plugin\n \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Download from shared storage\n \u2502 
\u251c\u2500\u2500\u2500\u2524 shared [Sd]\u2502 * Parallel asynchronous download\n \u2502 \u2502 \u2502 download \u2502 * Resilient retry and crash recovery\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Notification to event aggregator on completion\n \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u251c\u2500\u2500\u2500\u2524 identity [Im] \u2502 - Central identity management service across components\n \u2502 \u2502 \u2502 manager \u2502 * Resolves API input identity + key combos (short names, formatting etc.)\n \u2502 \u2502 \u2502 \u2502 * Resolves registered on-chain signing keys back to identities\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Integrates with Blockchain Interface and pluggable Identity Interface (TBD)\n \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Keeps track of all operations performed against external components via plugins\n \u2502 \u251c\u2500\u2500\u2500\u2524 operation [Om]\u2502 * Updates database with inputs/outputs\n \u2502 \u2502 \u2502 manager \u2502 * Provides consistent retry semantics across plugins\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Private data management and validation\n \u2502 \u251c\u2500\u2500\u2500\u2524 event [Em]\u2502 * Implements dispatcher for batch component\n \u2502 \u2502 \u2502 manager \u2502 * Integrates with data exchange [Dx] plugin\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Integrates 
with blockchain interface [Bi] plugin\n \u2502 \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Handles incoming external data\n \u2502 \u2502 \u2502 [Ag]\u2502 * Integrates with data exchange [Dx] plugin\n \u2502 \u2502 \u2502 aggregator \u2502 * Integrates with shared storage interface [Ss] plugin\n \u2502 \u2502 \u2502 \u2502 * Integrates with blockchain interface [Bi] plugin\n \u2502 \u2502 \u2502 \u2502 - Ensures valid events are dispatched only once all data is available\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Context aware, to prevent block-the-world scenarios\n \u2502 \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Subscription manager\n \u2502 \u2502 \u2502 [Sm]\u2502 * Creation and management of subscriptions\n \u2502 \u2502 \u2502 subscription \u2502 * Creation and management of subscriptions\n \u2502 \u2502 \u2502 manager \u2502 * Message to Event matching logic\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502 \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Manages delivery of events to connected applications\n \u2502 \u2502 \u2502 event [Ed]\u2502 * Integrates with data exchange [Dx] plugin\n \u2502 \u2502 \u2502 dispatcher \u2502 * Integrates with blockchain interface [Bi] plugin\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Token creation/transfer initiation, indexing and coordination\n \u2502 \u251c\u2500\u2500\u2500\u2524 asset 
[Am]\u2502 * Fungible tokens: Digitized value/settlement (coins)\n \u2502 \u2502 \u2502 manager \u2502 * Non-fungible tokens: NFTs / globally uniqueness / digital twins\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Full indexing of transaction history\n \u2502 \u2502 [REST/WebSockets]\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\n \u2502 \u2502 \u2502 ERC-20 / ERC-721 \u251c\u2500\u2500\u2500\u2524 ERC-1155 \u251c\u2500\u2500\u2500\u2524 Simple framework for building token connectors\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\n \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u251c\u2500\u2500\u2500\u2524 sync / [Sa] \u2502 - Sync/Async Bridge\n \u2502 \u2502 \u2502 async bridge \u2502 * Provides synchronous request/reply APIs\n \u2502 \u2502 \u2502 \u2502 * Translates to underlying event-driven API\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Aggregates messages and data, with rolled up hashes for pinning\n \u2502 \u251c\u2500\u2500\u2500\u2524 batch [Ba]\u2502 * Pluggable dispatchers\n \u2502 \u2502 \u2502 manager \u2502 - Database decoupled from main-line API processing\n \u2502 \u2502 \u2502 \u2502 * See architecture diagrams for more info on active/active sequencing\n \u2502 \u2502 
\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 - Manages creation of batch processor instances\n \u2502 \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Short lived agent spun up to assemble batches on demand\n \u2502 \u2502 \u2502 batch [Bp]\u2502 * Coupled to an author+type of messages\n \u2502 \u2502 \u2502 processor \u2502 - Builds batches of 100s messages for efficient pinning\n \u2502 \u2502 \u2502 \u2502 * Aggregates messages and data, with rolled up hashes for pinning\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 - Shuts down automatically after a configurable inactivity period\n \u2502 ... more TBD\n \u2502\nPlugins: Each plugin comprises a Go shim, plus a remote agent microservice runtime (if required)\n \u2502\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Blockchain Interface\n \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 [Bi]\u2502 * Transaction submission - including signing key management\n \u2502 \u2502 blockchain \u2502 * Event listening\n \u2502 \u2502 interface \u2502 * Standardized operations, and custom on-chain coupling\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502 \u2502\n \u2502 \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 
\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u2502 ethereum \u2502 \u2502 fabric \u2502 \u2502 corda/cordapps \u2502\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502 [REST/WebSockets]\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\n \u2502 \u2502 transaction manager [Tm] \u251c\u2500\u2500\u2500\u2524 Connector API [ffcapi] \u251c\u2500\u2500\u2500\u2524 Simple framework for building blockchain connectors\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\n \u2502\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Token interface\n \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 tokens [Ti]\u2502 * Standardizes core concepts: token pools, transfers, approvals\n \u2502 \u2502 interface \u2502 * Pluggable across token standards\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Supports simple implementation of custom token standards via microservice connector\n \u2502 
[REST/WebSockets]\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\n \u2502 \u2502 ERC-20 / ERC-721 \u251c\u2500\u2500\u2500\u2524 ERC-1155 \u251c\u2500\u2500\u2500\u2524 Simple framework for building token connectors\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\n \u2502\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - P2P Content Addresssed Filesystem\n \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 shared [Si]\u2502 * Payload upload / download\n \u2502 \u2502 storage \u2502 * Payload reference management\n \u2502 \u2502 interface \u2502\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502 \u2502\n \u2502 \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 ... 
extensible to any shared storage sytem, accessible to all members\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u2502 ipfs \u2502\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Private Data Exchange\n \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 data [Dx]\u2502 * Blob storage\n \u2502 \u2502 exchange \u2502 * Private secure messaging\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Secure file transfer\n \u2502 \u2502\n \u2502 \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 ... extensible to any private data exchange tech\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u2502 https / MTLS \u2502 \u2502 Kaleido \u2502\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - API Authentication and Authorization Interface\n \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 api auth [Aa]\u2502 * Authenticates security credentials (OpenID Connect id token JWTs etc.)\n \u2502 \u2502 \u2502 * Extracts API/user identity (for identity interface to map)\n \u2502 
\u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Enforcement point for fine grained API access control\n \u2502 \u2502\n \u2502 \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 ... extensible other single sign-on technologies\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u2502 apikey \u2502 \u2502 jwt \u2502\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Database Interactions\n \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 database [Di]\u2502 * Create, Read, Update, Delete (CRUD) actions\n \u2502 \u2502 interace \u2502 * Filtering and update definition interace\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Migrations and Indexes\n \u2502 \u2502\n \u2502 \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 ... 
extensible to NoSQL (CouchDB / MongoDB etc.)\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u2502 sqlcommon \u2502\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502 \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 ... extensible other SQL databases\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u2502 postgres \u2502 \u2502 sqlite3 \u2502\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Connects the core event engine to external frameworks and applications\n \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 event [Ei]\u2502 * Supports long-lived (durable) and ephemeral event subscriptions\n \u2502 \u2502 interface \u2502 * Batching, filtering, all handled in core prior to transport\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Interface supports connect-in (websocket) and connect-out (broker runtime style) plugins\n \u2502 \u2502\n \u2502 \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 ... 
extensible to additional event buses (Kafka, NATS, AMQP etc.)\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u2502 websockets \u2502 \u2502 webhooks \u2502\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502 ... more TBD\n\n Additional utility framworks\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - REST API client\n \u2502 rest [Re]\u2502 * Provides convenience and logging\n \u2502 client \u2502 * Standardizes auth, config and retry logic\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Built on Resty\n\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - WebSocket client\n \u2502 wsclient [Wc]\u2502 * Provides convenience and logging\n \u2502 \u2502 * Standardizes auth, config and reconnect logic\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Built on Gorilla WebSockets\n\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Translation framework\n \u2502 i18n [In]\u2502 * Every translations must be added to `en_translations.json` - with an `FF10101` key\n \u2502 \u2502 * Errors are wrapped, providing extra features from the `errors` package (stack etc.)\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Description translations also supported, such as OpenAPI description\n\n 
\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Logging framework\n \u2502 log [Lo]\u2502 * Logging framework (logrus) integrated with context based tagging\n \u2502 \u2502 * Context is used throughout the code to pass API invocation context, and logging context\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Example: Every API call has an ID that can be traced, as well as a timeout\n\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Configuration\n \u2502 config [Co]\u2502 * File and Environment Variable based logging framework (viper)\n \u2502 \u2502 * Primary config keys all defined centrally\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Plugins integrate by returning their config structure for unmarshaling (JSON tags)\n"},{"location":"contributors/code_overview/","title":"FireFly Code Overview","text":""},{"location":"contributors/code_overview/#developer-intro","title":"Developer Intro","text":"FireFly is a second generation implementation re-engineered from the ground up to improve developer experience, runtime performance, and extensibility.
This means a simplified REST/WebSocket programming model for app development, and a wider range of infrastructure options for deployment.
It also means a focus on an architecture and code structure that fosters a vibrant open source community.
A few highlights:
Asset, Data, Message, Event, Topic, Transaction
Added flexibility, with a simplified developer experience:
Versioning of data definitions
Context construct links related events into a single sequence
## Directories
FireFly has a plugin-based architecture, with a microservice runtime footprint. As such there are a number of repos, and the list will grow as the community evolves.
But not to worry, one of those repos is a CLI designed to get you running with all the components you need in minutes!
Note that only the projects that are primarily built to support FireFly are listed here, not the wider ecosystem of projects that integrate underneath the plugins.
"},{"location":"contributors/dev_environment_setup/","title":"Setting up a FireFly Core Development Environment","text":"This guide will walk you through setting up your machine for contributing to FireFly, specifically the FireFly core.
"},{"location":"contributors/dev_environment_setup/#dependencies","title":"Dependencies","text":"You will need a few prerequisites set up on your machine before you can build FireFly from source. We recommend doing development on macOS, Linux, or WSL 2.0.
The first step to setting up a local development environment is to install the FireFly CLI. Please see the Getting Started Guide to install the FireFly CLI.
"},{"location":"contributors/dev_environment_setup/#installing-go-and-setting-up-your-gopath","title":"Installing Go and setting up your GOPATH","text":"We recommend following the instructions on golang.org to install Go, rather than installing Go from another package manager such as brew. Although it is possible to install Go any way you'd like, if you install it differently, setting up your GOPATH may differ from the following instructions.
After installing Go, you will need to add a few environment variables to your shell run commands file. This is usually a hidden file in your home directory called .bashrc or .zshrc, depending on which shell you're using.
Add the following lines to your .bashrc or .zshrc file:
export GOPATH=$HOME/go\nexport GOROOT=\"/usr/local/go\"\nexport PATH=\"$PATH:${GOPATH}/bin:${GOROOT}/bin\"\n"},{"location":"contributors/dev_environment_setup/#building-firefly","title":"Building FireFly","text":"After installing dependencies, building FireFly from source is very easy. Just clone the repo:
git clone git@github.com:hyperledger/firefly.git && cd firefly\n Then run the Makefile to run the tests and compile the app:
make\n If you want to install the binary on your path (assuming your Go bin directory is already on your path), from inside the project directory you can simply run:
go install\n"},{"location":"contributors/dev_environment_setup/#install-the-cli","title":"Install the CLI","text":"Please check the CLI Installation instructions for the best way to install the CLI on your machine: https://github.com/hyperledger/firefly-cli#install-the-cli
"},{"location":"contributors/dev_environment_setup/#set-up-a-development-stack","title":"Set up a development stack","text":"Now that you have both FireFly and the FireFly CLI installed, it's time to create a development stack. The CLI can be used to create a docker-compose environment that runs the entirety of a FireFly network. This will include several different processes for each member of the network. This is very useful for people who want to build apps that use FireFly's API. It can also be useful if you want to make changes to FireFly itself; however, we need to set up the stack slightly differently in that case.
Essentially what we are going to do is have docker-compose run everything in the FireFly network except one FireFly core process. We'll run this FireFly core process on our host machine, and configure it to connect to the rest of the microservices running in docker-compose. This means we could launch FireFly from Visual Studio Code or some other IDE and use a debugger to see what's going on inside FireFly as it's running.
We'll call this stack dev. We're also going to add --external 1 to the end of our command to create the new stack:
ff init dev --external 1\n This tells the CLI that we want to manage one of the FireFly core processes outside the docker-compose stack. For convenience, the CLI will still generate a config file for this process though.
"},{"location":"contributors/dev_environment_setup/#start-the-stack","title":"Start the stack","text":"To start your new stack simply run:
ff start dev\n At a certain point in the startup process, the CLI will pause and wait for up to two minutes for you to start the other FireFly node. There are two different ways you can run the external FireFly core process.
"},{"location":"contributors/dev_environment_setup/#1-from-another-terminal","title":"1) From another terminal","text":"The CLI will print out the command line which can be copied and pasted into another terminal window to run FireFly. This command should be run from the firefly core project directory. Here is an example of the command that the CLI will tell you to run:
firefly -f ~/.firefly/stacks/dev/runtime/config/firefly_core_0.yml\n NOTE: The first time you run FireFly with a fresh database, it will need a directory of database migrations to apply to the empty database. If you run FireFly from the firefly project directory you cloned from GitHub, it will automatically find these and apply them. If you run it from some other directory, you will have to point FireFly to the migrations on your own.
If you named your stack dev, there is a launch.json file for Visual Studio Code already in the project directory. If you have the project open in Visual Studio Code, you can either press the F5 key to run it, or go to the \"Run and Debug\" view in Visual Studio Code and click \"Run FireFly Core\".
Now you should have a full FireFly stack up and running, and be able to debug FireFly using your IDE. Happy hacking!
NOTE: Because firefly-ui is a separate repo, unless you also start a UI dev server for the external FireFly core, the default UI path will not load. This is expected, and if you're just working on FireFly core itself, you don't need to worry about it.
Refer to Advanced CLI Usage.
"},{"location":"contributors/docs_setup/","title":"Contributing to Documentation","text":"This guide will walk you through setting up your machine for contributing to FireFly documentation. Documentation contributions are extremely valuable. If you discover something is missing in the docs, we would love to include your additions or clarifications to help the next person who has the same question.
This doc site is generated by a set of Markdown files in the main FireFly repository, under the ./doc-site directory. You can browse the source for the current live site in GitHub here: https://github.com/hyperledger/firefly/tree/main/doc-site
The process for updating the documentation is really easy! You'll follow the same basic steps outlined in the Contributor's Guide. Here are the detailed steps for contributing to the docs:
git commit -s!
This FireFly documentation site is based on the Hyperledger documentation template. The template utilizes MkDocs (documentation at mkdocs.org) and the theme Material for MkDocs (documentation at Material for MkDocs). Material adds a number of extra features to MkDocs, and Hyperledger repositories can take advantage of the theme's Insiders capabilities.
"},{"location":"contributors/docs_setup/#prerequisites","title":"Prerequisites","text":"To test the documents and update the published site, the following tools are needed:
mkdocs.yml file and needed for deploying the site to gh-pages.
git can be installed locally, as described in the Install Git Guide from GitHub.
Python 3 can be installed locally, as described in the Python Getting Started guide.
It is recommended to install your Python dependencies in a virtual environment in case you have other conflicting Python installations on your machine. This also removes the need to install these packages globally on your computer.
cd doc-site\npython3 -m venv venv\nsource venv/bin/activate\n"},{"location":"contributors/docs_setup/#mkdocs","title":"Mkdocs","text":"The Mkdocs-related items can be installed locally, as described in the Material for Mkdocs installation instructions. The short, case-specific version of those instructions follow:
pip install -r requirements.txt\n"},{"location":"contributors/docs_setup/#verify-setup","title":"Verify Setup","text":"To verify your setup, check that you can run mkdocs by running the command mkdocs --help to see the help text.
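Before running mkdocs itself, it can also help to confirm the virtual environment is in place and that installs will go into it rather than your global Python. A minimal sketch (assumes python3 is on your PATH and you are in the doc-site directory):

```shell
# Create the virtual environment (same command as above) and inspect it
python3 -m venv venv
./venv/bin/python --version   # the interpreter pip will install against
ls venv/bin/activate          # the script that "source venv/bin/activate" runs
```

If `venv/bin/python` and `venv/bin/activate` both exist, the environment is ready for the `pip install` step.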
The commands you will usually use with mkdocs are:
mkdocs serve - Start the live-reloading docs server.
mkdocs build - Build the documentation site.
mkdocs -h - Print help message and exit.
mkdocs.yml # The configuration file.\ndocs/\n index.md # The documentation homepage.\n SUMMARY.md # The main left nav\n ... # Other markdown pages, images and other files.\n"},{"location":"contributors/release_guide/","title":"Release Guide","text":"This guide will walk you through creating a release.
"},{"location":"contributors/release_guide/#versioning-scheme","title":"Versioning scheme","text":"FireFly follows semantic versioning. For more details on how we determine which version to use please see the Versioning Scheme guide.
"},{"location":"contributors/release_guide/#the-manifestjson-file","title":"Themanifest.json file","text":"FireFly has a manifest.json file in the root of the repo. This file contains a list of versions (both tag and sha) for each of the microservices that should be used with this specific commit. If you need FireFly to use a newer version of a microservice listed in this file, you should update the manifest.json file, commit it, and include it in your PR. This will trigger an end-to-end test run, using the specified versions.
Here is an example of what the manifest.json looks like:
{\n \"ethconnect\": {\n \"image\": \"ghcr.io/hyperledger/firefly-ethconnect\",\n \"tag\": \"v3.0.4\",\n \"sha\": \"0b7ce0fb175b5910f401ff576ced809fe6f0b83894277c1cc86a73a2d61c6f41\"\n },\n \"fabconnect\": {\n \"image\": \"ghcr.io/hyperledger/firefly-fabconnect\",\n \"tag\": \"v0.9.0\",\n \"sha\": \"a79a4c66b0a2551d5122d019c15c6426e8cdadd6566ce3cbcb36e008fb7861ca\"\n },\n \"dataexchange-https\": {\n \"image\": \"ghcr.io/hyperledger/firefly-dataexchange-https\",\n \"tag\": \"v0.9.0\",\n \"sha\": \"0de5b1db891a02871505ba5e0507821416d9fa93c96ccb4b1ba2fac45eb37214\"\n },\n \"tokens-erc1155\": {\n \"image\": \"ghcr.io/hyperledger/firefly-tokens-erc1155\",\n \"tag\": \"v0.9.0-20211019-01\",\n \"sha\": \"aabc6c483db408896838329dab5f4b9e3c16d1e9fa9fffdb7e1ff05b7b2bbdd4\"\n }\n}\n NOTE: You can run make manifest in the FireFly core source directory, and a script will run to automatically get the latest non-pre-release version of each of FireFly's microservices. If you need to use a snapshot or pre-release version, you should edit the manifest.json file manually, as this script will not fetch those versions.
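If you just want to check which tag is pinned for a given service, you can inspect the file from the command line. A minimal sketch using only POSIX tools (jq is cleaner if you have it installed); the heredoc below abbreviates the example manifest above so the snippet is self-contained:

```shell
# Write an abbreviated manifest (stands in for the real manifest.json)
cat > manifest-example.json <<'EOF'
{
  "ethconnect": {
    "image": "ghcr.io/hyperledger/firefly-ethconnect",
    "tag": "v3.0.4"
  }
}
EOF
# Extract the first "tag" value from the file
tag=$(sed -n 's/.*"tag": "\([^"]*\)".*/\1/p' manifest-example.json | head -n 1)
echo "pinned ethconnect tag: ${tag}"   # pinned ethconnect tag: v3.0.4
```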
Releases and builds are managed by GitHub. New binaries and/or Docker images will automatically be created when a new release is published. The easiest way to create a release is through the web UI for the repo that you wish to release.
"},{"location":"contributors/release_guide/#1-navigate-to-the-release-page-for-the-repo","title":"1) Navigate to the release page for the repo","text":""},{"location":"contributors/release_guide/#2-click-the-draft-a-new-release-button","title":"2) Click theDraft a new release button","text":""},{"location":"contributors/release_guide/#3-fill-out-the-form-for-your-release","title":"3) Fill out the form for your release","text":"It is recommended to start with the auto-generated release notes. Additional notes can be added as-needed.
"},{"location":"contributors/release_guide/#automatic-docker-builds","title":"Automatic Docker builds","text":"After cutting a new release, a GitHub Action will automatically start a new Docker build, if the repo has a Docker image associated with it. You can check the status of the build by clicking the \"Actions\" tab along the top of the page, for that repo.
"},{"location":"contributors/version_scheme/","title":"Versioning Scheme","text":"This page describes FireFly's versioning scheme
"},{"location":"contributors/version_scheme/#semantic-versioning","title":"Semantic versioning","text":"FireFly follows semantic versioning. In summary, this means:
Given a version number MAJOR.MINOR.PATCH, increment the:
MAJOR version when you make incompatible API changes
MINOR version when you add functionality in a backward compatible manner
PATCH version when you make backward compatible bug fixes
When creating a new release, the release name and tag should be the semantic version, prefixed with a v. For example, a release name/tag could be v0.9.0.
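As an illustrative sketch (the version values here are hypothetical, not real releases), computing the next patch release from an existing tag looks like this:

```shell
ver="v0.9.0"                       # hypothetical current release tag
IFS=. read -r major minor patch <<EOF
${ver#v}
EOF
patch=$((patch + 1))               # a backward compatible bug fix release
next="v${major}.${minor}.${patch}"
echo "next patch release: ${next}" # next patch release: v0.9.1
```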
For pre-release versions for testing, we append a date and index to the end of the most recently released version. For example, if we needed to create a pre-release based on v0.9.0 and today's date is October 22, 2021, the version name/tag would be: v0.9.0-20211022-01. If for some reason you needed to create another pre-release version on the same day (hey, stuff happens), the name/tag for that one would be v0.9.0-20211022-02.
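The date-and-index convention above can be sketched in shell. The values are illustrative; in a real release you would substitute the actual base tag and today's date:

```shell
base="v0.9.0"             # the most recently released version
datestamp="20211022"      # in practice: $(date +%Y%m%d)
index="01"                # bump to 02, 03... for later pre-releases that day
tag="${base}-${datestamp}-${index}"
echo "pre-release tag: ${tag}"   # pre-release tag: v0.9.0-20211022-01
```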
For pre-releases that are candidates to become a new major or minor release, the release name/tag will be based on the release that the candidate will become (as opposed to the test releases above, which are based on the previous release). For example, if the current latest release is v0.9.0 but we want to create an alpha release for 1.0, the release name/tag would be v1.0.0-alpha.1.
Find answers to the most commonly asked FireFly questions.
"},{"location":"faqs/#how-does-firefly-enable-multi-chain-applications","title":"How does FireFly enable multi-chain applications?","text":"It's best to think about FireFly as a rich orchestration layer that sits one layer above the blockchain. FireFly helps to abstract away much of the complex blockchain functionality (such as data exchange, private messaging, common token functionality, etc) in a loosely coupled microservice architecture with highly pluggable components. This enables application developers to focus on building innovative Web3 applications.
There aren't any out of the box bridges to connect two separate chains together, but with a collection of FireFly instances across a consortium, FireFly could help listen for events on Blockchain A and take an action on Blockchain B when certain conditions are met.
"},{"location":"faqs/#how-do-i-deploy-smart-contracts","title":"\ud83d\udcdc How do I deploy smart contracts?","text":"The recommended way to deploy smart contracts on Ethereum chains is by using FireFly's built in API. For a step by step example of how to do this you can refer to the Smart Contract Tutorial for Ethereum based chains.
For Fabric networks, please refer to the Fabric chaincode lifecycle docs for detailed instructions on how to deploy and manage Fabric chaincode.
"},{"location":"faqs/#can-i-connect-firefly-to-metamask","title":"\ud83e\udd8a Can I connect FireFly to MetaMask?","text":"Yes! Before you set up MetaMask you'll likely want to create some tokens that you can use to send between wallets on your FF network. Go to the tokens tab in your FireFly node's UI, create a token pool, and then mint some tokens. Once you've done this, follow the steps listed here to set up MetaMask on your network.
"},{"location":"faqs/#connect-with-us-on-discord","title":"\ud83d\ude80 Connect with us on Discord","text":"If your question isn't answered here or if you have immediate questions please don't hesitate to reach out to us on Discord in the firefly channel:
If you're new to FireFly, this is the perfect place to start! With the FireFly CLI and the FireFly Sandbox it's really easy to get started building powerful blockchain apps. Just follow along with the steps below and you'll be up and running in no time!
"},{"location":"gettingstarted/#what-you-will-accomplish-with-this-guide","title":"What you will accomplish with this guide","text":"
With this easy-to-follow guide, you'll go from \"zero\" to blockchain-hero in the time it takes to drink a single cup of coffee. It will walk you through setting up your machine, all the way through sending your first blockchain transactions using the FireFly Sandbox.
"},{"location":"gettingstarted/#were-here-to-help","title":"We're here to help!","text":"We want to make it as easy as possible for anyone to get started with FireFly, and we don't want anyone to feel like they're stuck. If you're having trouble, or are just curious about what else you can do with FireFly we encourage you to join the Hyperledger Discord server and come chat with us in the #firefly channel.
"},{"location":"gettingstarted/#get-started-install-the-firefly-cli","title":"Get started: Install the FireFly CLI","text":"Now that you've got the FireFly CLI set up on your machine, the next step is to create and start a FireFly stack.
\u2460 Install the FireFly CLI \u2192
"},{"location":"gettingstarted/firefly_cli/","title":"Install the FireFly CLI","text":""},{"location":"gettingstarted/firefly_cli/#prerequisites","title":"Prerequisites","text":"In order to run the FireFly CLI, you will need a few things installed on your dev machine:
NOTE: For Linux users, it is recommended that you add your user to the docker group so that you do not have to run ff or docker as root or with sudo. For more information about Docker permissions on Linux, please see Docker's documentation on the topic.
NOTE: For Windows users, we recommend that you use Windows Subsystem for Linux 2 (WSL2). Binaries provided for Linux will work in this environment.
"},{"location":"gettingstarted/firefly_cli/#install-the-cli","title":"Install the CLI","text":"There are several ways to install the FireFly CLI. The easiest way to get up and running with the FireFly CLI is to download a pre-compiled binary of the latest release.
"},{"location":"gettingstarted/firefly_cli/#install-via-binary-package-download","title":"Install via Binary Package Download","text":"Download the package for your OS by navigating to the latest release page and downloading the appropriate package for your OS and architecture.
"},{"location":"gettingstarted/firefly_cli/#unpack-and-install-the-binary","title":"Unpack and Install the Binary","text":"Assuming you downloaded the package from GitHub into your Downloads directory, run the following command to extract the binary and move it to your system path:
sudo tar -zxf ~/Downloads/firefly-cli_*.tar.gz -C /usr/local/bin ff && rm ~/Downloads/firefly-cli_*.tar.gz\n If you downloaded the package into a different directory, adjust the command to point to the correct location of the firefly-cli_*.tar.gz file.
NOTE: On recent versions of macOS, default security settings will prevent the FireFly CLI binary from running, because it was downloaded from the internet. You will need to allow the FireFly CLI in System Preferences, before it will run.
"},{"location":"gettingstarted/firefly_cli/#install-via-homebrew-macos","title":"Install via Homebrew (macOS)","text":"You can also install the FireFly CLI using Homebrew:
brew install firefly\n"},{"location":"gettingstarted/firefly_cli/#alternative-installation-method-install-via-go","title":"Alternative installation method: Install via Go","text":"If you have a local Go development environment, and you have included ${GOPATH}/bin in your path, you could also use Go to install the FireFly CLI by running:
go install github.com/hyperledger/firefly-cli/ff@latest\n"},{"location":"gettingstarted/firefly_cli/#verify-the-installation","title":"Verify the installation","text":"After using either installation method above, you can verify that the CLI is successfully installed by running ff version. This should print the current version like this:
{\n \"Version\": \"v0.0.47\",\n \"License\": \"Apache-2.0\"\n}\n"},{"location":"gettingstarted/firefly_cli/#next-steps-start-your-environment","title":"Next steps: Start your environment","text":"Now that you've got the FireFly CLI set up on your machine, the next step is to create and start a FireFly stack.
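Because ff version prints plain JSON, scripts can pick out the version field with standard tools. A sketch using sed on a captured sample (substitute "$(ff version)" for the sample on a machine with the CLI installed):

```shell
# Self-contained sample of the `ff version` output shown above.
sample='{
  "Version": "v0.0.47",
  "License": "Apache-2.0"
}'
# Extract the value of the "Version" field without needing jq.
version=$(printf '%s\n' "$sample" | sed -n 's/.*"Version": "\([^"]*\)".*/\1/p')
echo "$version"   # v0.0.47
```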
\u2461 Start your environment \u2192
"},{"location":"gettingstarted/sandbox/","title":"Use the Sandbox","text":""},{"location":"gettingstarted/sandbox/#previous-steps-start-your-environment","title":"Previous steps: Start your environment","text":"If you haven't started a FireFly stack already, please go back to the previous step and read the guide on how to Start your environment.
\u2190 \u2461 Start your environment
Now that you have a full network of three Supernodes running on your machine, let's look at the first two components that you will interact with: the FireFly Sandbox and the FireFly Explorer.
"},{"location":"gettingstarted/sandbox/#video-walkthrough","title":"Video walkthrough","text":"This video is a walkthrough of the FireFly Sandbox and FireFly Explorer from the FireFly 1.0 launch webinar. At this point you should be able to follow along and try all these same things on your own machine.
"},{"location":"gettingstarted/sandbox/#open-the-firefly-sandbox-for-the-first-member","title":"Open the FireFly Sandbox for the first member","text":"When you set up your FireFly stack in the previous section, it should have printed some URLs like the following. Open the link in a browser for the Sandbox UI for member '0'. It should be: http://127.0.0.1:5109
ff start dev\nthis will take a few seconds longer since this is the first time you're running this stack...\ndone\n\nWeb UI for member '0': http://127.0.0.1:5000/ui\nSandbox UI for member '0': http://127.0.0.1:5109\n\nWeb UI for member '1': http://127.0.0.1:5001/ui\nSandbox UI for member '1': http://127.0.0.1:5209\n\nWeb UI for member '2': http://127.0.0.1:5002/ui\nSandbox UI for member '2': http://127.0.0.1:5309\n\n\nTo see logs for your stack run:\n\nff logs dev\n"},{"location":"gettingstarted/sandbox/#sandbox-layout","title":"Sandbox Layout","text":"The Sandbox is split up into three columns:
"},{"location":"gettingstarted/sandbox/#left-column-prepare-your-request","title":"Left column: Prepare your request","text":"On the left-hand side of the page, you can fill out simple form fields to construct messages and more. Some tabs have more types of requests on them in sections that can be expanded or collapsed. Across the top of this column there are three tabs that switch between the three main sets of functionality in the Sandbox. The next three sections of this guide will walk you through each one of these.
The first tab we will explore is the MESSAGING tab. This is where we can send broadcasts and private messages.
"},{"location":"gettingstarted/sandbox/#middle-column-preview-server-code-and-see-response","title":"Middle column: Preview server code and see response","text":"As you type in the form on the left side of the page, you may notice that the source code in the top middle of the page updates automatically. If you were building a backend app, this is an example of code that your app could use to call the FireFly SDK. The middle column also contains a RUN button to actually send the request.
On the right-hand side of the page you can see a stream of events being received on a WebSocket connection that the backend has open to FireFly. For example, as you make requests to send messages, you can see when the messages are asynchronously confirmed.
"},{"location":"gettingstarted/sandbox/#messages","title":"Messages","text":"The Messages tab is where we can send broadcast and private messages to other members and nodes in the FireFly network. Messages can be a string, any arbitrary JSON object, or a binary file. For more details, please see the tutorial on Broadcasting data and Privately sending data.
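Under the covers, the Sandbox's message form calls FireFly's messages API. A sketch of the equivalent request — the default namespace and port 5000 are assumptions matching a stock local stack, and the curl command is printed rather than executed here:

```shell
# JSON body for a simple string broadcast via FireFly's messages API.
payload='{"data":[{"value":"Hello from member 0!"}]}'
url="http://127.0.0.1:5000/api/v1/namespaces/default/messages/broadcast"
# Print the request we would send; run it for real against a started stack.
echo "curl -X POST -H 'Content-Type: application/json' -d '$payload' $url"
```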
"},{"location":"gettingstarted/sandbox/#things-to-try-out","title":"Things to try out","text":"The Tokens tab is where you can create token pools, and mint, burn, or transfer tokens. This works with both fungible and non-fungible tokens (NFTs). For more details, please see the Tokens tutorials.
"},{"location":"gettingstarted/sandbox/#things-to-try-out_1","title":"Things to try out","text":"The Contracts section of the Sandbox lets you interact with custom smart contracts, right from your web browser! The Sandbox also provides some helpful tips on deploying your smart contract to the blockchain. For more details, please see the tutorial on Working with custom smart contracts.
"},{"location":"gettingstarted/sandbox/#things-to-try-out_2","title":"Things to try out","text":"At this point you should have a pretty good understanding of some of the major features of Hyperledger FireFly. Now, using what you've learned, you can go and build your own Web3 app! Don't forget to join the Hyperledger Discord server and come chat with us in the #firefly channel.
"},{"location":"gettingstarted/setup_env/","title":"Start your environment","text":""},{"location":"gettingstarted/setup_env/#previous-steps-install-the-firefly-cli","title":"Previous steps: Install the FireFly CLI","text":"If you haven't set up the FireFly CLI already, please go back to the previous step and read the guide on how to Install the FireFly CLI.
\u2190 \u2460 Install the FireFly CLI
Now that you have the FireFly CLI installed, you are ready to run some Supernodes on your machine!
"},{"location":"gettingstarted/setup_env/#a-firefly-stack","title":"A FireFly Stack","text":"A FireFly stack is a collection of Supernodes with networking and configuration that are designed to work together on a single development machine. A stack has multiple members (also referred to as organizations). Each member has their own Supernode within the stack. This allows developers to build and test data flows with a mix of public and private data between various parties, all within a single development environment.
The stack also contains an instance of the FireFly Sandbox for each member. This is an example of an end-user application that uses FireFly's API. It has a backend and a frontend which are designed to walk developers through the features of FireFly, and provides code snippets as examples of how to build those features into their own application. The next section in this guide will walk you through using the Sandbox.
"},{"location":"gettingstarted/setup_env/#system-resources","title":"System Resources","text":"The FireFly stack will run in a docker-compose project. For systems that run Docker containers inside a virtual machine, like macOS, you need to make sure that you've allocated enough memory to the Docker virtual machine. We recommend allocating 1GB per member. In this case, we're going to set up a stack with 3 members, so please make sure you have at least 3 GB of RAM allocated in your Docker Desktop settings.
It's really easy to create a new FireFly stack. The ff init command can create a new stack for you, and will prompt you for a few details such as the name, and how many members you want in your stack.
To create an Ethereum based stack, run:
ff init ethereum\n To create a Fabric based stack, run:
ff init fabric\n Choose a stack name. For this guide, I will choose the name dev, but you can pick whatever you want:
stack name: dev\n Choose the number of members for your stack. For this guide, we should pick 3 members, so we can try out both public and private messaging use cases:
number of members: 3\n "},{"location":"gettingstarted/setup_env/#stack-initialization-options","title":"Stack initialization options","text":"There are quite a few options that you can choose from when creating a new stack. For now, we'll just stick with the defaults. To see the full list of Ethereum options, just run ff init ethereum --help or to see the full list of Fabric options run ff init fabric --help
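The interactive prompts can also be skipped entirely: ff init accepts the stack name and member count as positional arguments, combined with any of the flags shown in the help output. A dry-run sketch that assembles (but does not execute) a command equivalent to this guide's answers, spelling out two defaults explicitly:

```shell
stack_name="dev"
member_count=3
# --blockchain-node and --token-providers are shown with their default values.
init_cmd="ff init ethereum $stack_name $member_count --blockchain-node geth --token-providers erc20_erc721"
echo "$init_cmd"
```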
ff init ethereum --help\nCreate a new FireFly local dev stack using an Ethereum blockchain\n\nUsage:\n ff init ethereum [stack_name] [member_count] [flags]\n\nFlags:\n --block-period int Block period in seconds. Default is variable based on selected blockchain provider. (default -1)\n -c, --blockchain-connector string Blockchain connector to use. Options are: [evmconnect ethconnect] (default \"evmconnect\")\n -n, --blockchain-node string Blockchain node type to use. Options are: [geth besu remote-rpc] (default \"geth\")\n --chain-id int The chain ID - also used as the network ID (default 2021)\n --contract-address string Do not automatically deploy a contract, instead use a pre-configured address\n -h, --help help for ethereum\n --remote-node-url string For cases where the node is pre-existing and running remotely\n\nGlobal Flags:\n --ansi string control when to print ANSI control characters (\"never\"|\"always\"|\"auto\") (default \"auto\")\n --channel string Select the FireFly release channel to use. Options are: [stable head alpha beta rc] (default \"stable\")\n --connector-config string The path to a yaml file containing extra config for the blockchain connector\n --core-config string The path to a yaml file containing extra config for FireFly Core\n -d, --database string Database type to use. Options are: [sqlite3 postgres] (default \"sqlite3\")\n -e, --external int Manage a number of FireFly core processes outside of the docker-compose stack - useful for development and debugging\n -p, --firefly-base-port int Mapped port base of FireFly core API (1 added for each member) (default 5000)\n --ipfs-mode string Set the mode in which IPFS operates. Options are: [private public] (default \"private\")\n -m, --manifest string Path to a manifest.json file containing the versions of each FireFly microservice to use. 
Overrides the --release flag.\n --multiparty Enable or disable multiparty mode (default true)\n --node-name stringArray Node name\n --org-name stringArray Organization name\n --prometheus-enabled Enables Prometheus metrics exposition and aggregation to a shared Prometheus server\n --prometheus-port int Port for the shared Prometheus server (default 9090)\n --prompt-names Prompt for org and node names instead of using the defaults\n -r, --release string Select the FireFly release version to use. Options are: [stable head alpha beta rc] (default \"latest\")\n --request-timeout int Custom request timeout (in seconds) - useful for registration to public chains\n --sandbox-enabled Enables the FireFly Sandbox to be started with your FireFly stack (default true)\n -s, --services-base-port int Mapped port base of services (100 added for each member) (default 5100)\n -t, --token-providers stringArray Token providers to use. Options are: [none erc1155 erc20_erc721] (default [erc20_erc721])\n -v, --verbose verbose log output\n"},{"location":"gettingstarted/setup_env/#start-your-stack","title":"Start your stack","text":"To start your stack simply run:
ff start dev\n This may take a minute or two and in the background the FireFly CLI will do the following for you:
Deploy the BatchPin smart contract. Deploy the ERC-1155 token smart contract. NOTE: For macOS users, the default port (5000) is already in use by the Control Center service (AirPlay Receiver). You can either disable this service in your environment, or use a different port when creating your stack (e.g. ff init dev -p 8000)
After your stack finishes starting it will print out the links to each member's UI and the Sandbox for that node:
ff start dev\nthis will take a few seconds longer since this is the first time you're running this stack...\ndone\n\nWeb UI for member '0': http://127.0.0.1:5000/ui\nSandbox UI for member '0': http://127.0.0.1:5109\n\nWeb UI for member '1': http://127.0.0.1:5001/ui\nSandbox UI for member '1': http://127.0.0.1:5209\n\nWeb UI for member '2': http://127.0.0.1:5002/ui\nSandbox UI for member '2': http://127.0.0.1:5309\n\n\nTo see logs for your stack run:\n\nff logs dev\n"},{"location":"gettingstarted/setup_env/#next-steps-use-in-the-sandbox","title":"Next steps: Use in the Sandbox","text":"Now that you have some Supernodes running, it's time to start playing: in the Sandbox!
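The URLs printed above follow a fixed arithmetic: the core API/UI base port is 5000 (one added per member) and the services base port is 5100 (100 added per member), matching the --firefly-base-port and --services-base-port flags; the Sandbox offset of 9 within each member's service range is inferred from the printed URLs. A sketch for member 2:

```shell
member=2
core_base=5000      # --firefly-base-port default
services_base=5100  # --services-base-port default
web_ui="http://127.0.0.1:$((core_base + member))/ui"
sandbox_ui="http://127.0.0.1:$((services_base + member * 100 + 9))"
echo "$web_ui"       # http://127.0.0.1:5002/ui
echo "$sandbox_ui"   # http://127.0.0.1:5309
```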
\u2462 Use the Sandbox \u2192
"},{"location":"overview/gateway_features/","title":"Web3 Gateway Features","text":"Web3 Gateway features allow your FireFly Supernode to connect to any blockchain ecosystem, public or private. When a chain is connected, the FireFly Supernode may invoke custom smart contracts, interact with tokens, and monitor transactions. A single FireFly Supernode is able to have multiple namespaces, or isolated environments, where each namespace is a connection to a different chain.
"},{"location":"overview/gateway_features/#transfer-tokenized-value","title":"Transfer tokenized value","text":"The Digital Asset Features allow you to connect to token economies, in multiple blockchains, using the same infrastructure and signing keys.
The complexities of how each token works, and how each blockchain works, are abstracted away from you by the Hyperledger FireFly Connector Framework.
All of the layers of plumbing required to execute a transaction exactly once on a blockchain, and tracking it through to completion, are part of the stack. Deploy and configure them once in your Web3 gateway, and use them for multiple use cases in your enterprise.
"},{"location":"overview/gateway_features/#invoke-any-other-type-of-smart-contract","title":"Invoke any other type of smart contract","text":"The API Generation features of Hyperledger FireFly allow you to generate a convenient and reliable REST API for any smart contract logic.
Then you just invoke that contract like you would any other API, with all the features you would expect like an OpenAPI 3.0 specification for the API, and UI explorer.
The same reliable transaction submission framework is used as for token transfers, and you can use Hyperledger FireFly as a high volume staging post for those transactions.
For EVM based chains, these features were significantly enhanced in the new EVMConnect connector introduced in v1.1 of FireFly (superseding EthConnect).
"},{"location":"overview/gateway_features/#index-data-from-the-blockchain","title":"Index data from the blockchain","text":"Blockchain nodes are not designed for efficient querying of historical information. Instead, their core function is to provide an ordered ledger of actions+events, along with a consistent world state at any point in time.
This means that almost all user experiences and business APIs need a separate data store that provides a fast indexed view of the history and current state of the chain.
As an example, you've probably looked at a Block Explorer for a public blockchain on the web. Well, you weren't looking directly at the blockchain node. You were querying an off-chain indexed database, of all the blocks and transactions on that chain. An indexer behind the scenes was listening to the blockchain and synchronizing the off-chain state.
Hyperledger FireFly has a built-in indexer for tokens that maps every token mint/burn/transfer/approve operation that happens on the blockchain into the database for fast query. You just specify which tokens you're interested in, and FireFly takes care of the rest.
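Once indexed, that history is an ordinary REST query away. A sketch of listing recent token transfers — the port and the default namespace are assumptions for a local development stack, and the request is printed rather than sent:

```shell
# List the 25 most recent indexed token transfers via FireFly's REST API.
query="http://127.0.0.1:5000/api/v1/namespaces/default/tokens/transfers?limit=25"
echo "curl $query"
```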
Additionally, FireFly does the heavy lifting part of indexing for all other types of smart contract events that might occur. It scrapes the blockchain for the events, formats them into easy-to-consume JSON, and reliably delivers them to your application.
So your application just needs a small bit of code to take those payloads, and insert them into the database with the right database indexes you need to query your data by.
"},{"location":"overview/gateway_features/#reliably-trigger-events-in-your-applications","title":"Reliably trigger events in your applications","text":"One of the most important universal rules about Web3 applications is that they are event-driven.
No one party in the system can choose to change the state; instead they must submit transactions that get ordered against everyone else's transactions, and only once confirmed through the consensus algorithm are they actioned.
This means the integration into your application and core systems needs to be event-driven too.
The same features that support reliable indexing of the blockchain data, allow reliable triggering of application code, business workflows, and core system integrations.
Learn more about the FireFly Event Bus
"},{"location":"overview/gateway_features/#manage-decentralized-data-nfts-etc","title":"Manage decentralized data (NFTs etc.)","text":"Your blockchain transactions are likely to refer to data that is stored off-chain.
One common example is non-fungible-token (NFT) metadata, images and documents. These are not a good fit for storing directly in any blockchain ledger, so complementary decentralized technologies like the InterPlanetary File System (IPFS) are used to make the data widely available and resilient outside of the blockchain itself.
As a publisher or consumer of such metadata from decentralized storage, you need to be confident you have your own copy safe. So just like with the blockchain data, Hyperledger FireFly can act as a staging post for this data.
Structured JSON data can be stored, uploaded and downloaded from the FireFly database.
Large image/document/video payloads are handled by the pluggable Data Exchange microservice, which allows you to attach local or cloud storage to manage your copy of the data.
FireFly then provides a standardized API to allow publishing of this data. So configuring a reliable gateway to the decentralized storage tier can be done once, and then accessed from your applications via a single Web3 Gateway.
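As a sketch of that staging-post flow, a small JSON metadata document can be stored in FireFly's data store before being published. The port, namespace, and the metadata fields themselves are illustrative assumptions; the request is printed, not sent:

```shell
# Illustrative NFT-style metadata to stage in the FireFly data store.
data='{"value":{"name":"example-token","description":"illustrative metadata"}}'
url="http://127.0.0.1:5000/api/v1/namespaces/default/data"
echo "curl -X POST -H 'Content-Type: application/json' -d '$data' $url"
```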
"},{"location":"overview/gateway_features/#maintain-a-private-address-book","title":"Maintain a private address book","text":"You need to manage your signing keys, and know the signing keys of others you are transacting with. A blockchain address like 0x0742e81393ee79C768e84cF57F1bF314F0f31ECe is not very helpful for this.
So Hyperledger FireFly provides a pluggable identity system, built on the foundation of the Decentralized IDentifier (DID). When in Web3 Gateway Mode these identities are not shared or published, and simply provide you a local address book.
You can associate profile information with the identities, for example correlating them to the identifiers in your own core systems - such as an Identity and Access Management (IAM) system, or Know Your Customer (KYC) database.
Learn more about Hyperledger FireFly Identities
"},{"location":"overview/public_vs_permissioned/","title":"Public and Permissioned Blockchain","text":""},{"location":"overview/public_vs_permissioned/#public-and-permissioned-blockchains","title":"Public and Permissioned Blockchains","text":"Separate from the choice of technology for your blockchain is the combination of blockchain ecosystems you will integrate with.
There are a huge variety of options, and increasingly you might find yourself integrating with multiple ecosystems in your solutions.
A rough (and incomplete) high level classification of the blockchains available is as follows:
The lines are blurring between these categorizations as the technologies and ecosystems evolve.
"},{"location":"overview/public_vs_permissioned/#public-blockchain-variations","title":"Public blockchain variations","text":"For the public Layer 1 and 2 solutions, there are too many subclassifications to go into in detail here:
The thing most consistent across public blockchain technologies is that the technical decisions are backed by token economics.
Put simply, creating a system where it's more financially rewarding to behave honestly, than it is to subvert and cheat the system.
This means that participation has a cost, and that the mechanisms needed to reliably get your transactions into these systems are complex. Also, the time it might take to get a transaction onto the chain can be much longer than for a permissioned blockchain, with the potential to have to make a number of adjustments/resubmissions.
The choice of whether to run your own node, or use a managed API, to access these blockchain ecosystems is also a factor in the behavior of the transaction submission and event streaming.
"},{"location":"overview/public_vs_permissioned/#firefly-architecture-for-public-chains","title":"FireFly architecture for public chains","text":"One of the fastest evolving aspects of the Hyperledger FireFly ecosystem is how it enables enterprises to participate in these public ecosystems.
The architecture is summarized as follows:
An operation resource within FireFly Core to store and update state, nonce assignment, and the FireFly Connector API (ffcapi). This evolution involves a significant refactoring of components used for production solutions in the FireFly EthConnect microservice since mid 2018. This was summarized in firefly-ethconnect#149, and culminated in the creation of a new repository in 2022.
You can follow the progress and contribute in this repo: https://github.com/hyperledger/firefly-transaction-manager
"},{"location":"overview/supernode_concept/","title":"Introduction to Hyperledger FireFly","text":""},{"location":"overview/supernode_concept/#your-gateway-to-web3-technologies","title":"Your Gateway to Web3 Technologies","text":"Hyperledger FireFly is an organization's gateway to Web3, including all the blockchain ecosystems that they participate in.
Multiple blockchains, multiple token economies, and multiple business networks.
FireFly is not another blockchain implementation, rather it is a pluggable API Orchestration and Data layer, integrating into all of the different types of decentralized technologies that exist in Web3:
Hyperledger FireFly is a toolkit for building and connecting new full-stack decentralized applications (dapps), as well as integrating your existing core systems to the world of Web3.
It has a runtime engine, and it provides a data layer that synchronizes state from the blockchain and other Web3 technologies. It exposes an API and Event Bus to your business logic, that is reliable, developer friendly and ready for enterprise use.
We call this a Supernode - it sits between the application and the underlying infrastructure nodes, providing layers of additional function.
The concept of a Supernode has evolved over the last decade of enterprise blockchain projects, as developers realized that they need much more than a blockchain node for their projects to be successful.
Without a technology like Hyperledger FireFly, the application layer becomes extremely complex and fragile. Tens of thousands of lines of complex low-level \"plumbing\" / \"middleware\" code are required to integrate the Web3 infrastructure into the application. This code provides zero unique business value to the solution, but can consume a huge proportion of the engineering budget and maintenance cost if built bespoke within a solution.
"},{"location":"overview/usage_patterns/","title":"Usage Patterns","text":"There are two modes of usage for Hyperledger FireFly: Web3 Gateway and Multiparty.
A single runtime can operate in both of these modes, using different namespaces.
"},{"location":"overview/usage_patterns/#web3-gateway-mode","title":"Web3 Gateway Mode","text":"Web3 Gateway mode lets you interact with any Web3 application, regardless of whether Hyperledger FireFly is being used by other members of your business network.
In this mode you can:
Learn more about Web3 Gateway Mode.
"},{"location":"overview/usage_patterns/#multiparty-mode","title":"Multiparty Mode","text":"Multiparty mode is used to build multi-party systems, with a common application runtime deployed by each enterprise participant.
This allows sophisticated applications to be built, that all use the pluggable APIs of Hyperledger FireFly to achieve end-to-end business value in an enterprise context.
In this mode you can do everything you could do in Web3 Gateway mode, plus:
Learn more about Multiparty Mode.
"},{"location":"overview/key_components/","title":"Key Features","text":"Hyperledger FireFly provides a rich suite of features for building new applications, and connecting existing Web3 ecosystems to your business. In this section we introduce each core pillar of functionality.
"},{"location":"overview/key_components/apps/","title":"Apps","text":""},{"location":"overview/key_components/apps/#apps","title":"Apps","text":"Rapidly accelerating development of applications is a key feature of Hyperledger FireFly.
The toolkit is designed to support the full-stack of applications in the enterprise Web3 ecosystem, not just the Smart Contract layer.
Business logic APIs, back-office system integrations, and web/mobile user experiences are just as important to the overall Web3 use case.
These layers require a different developer skill-set to the on-chain Smart Contracts, and those developers must have the tools they need to work efficiently.
"},{"location":"overview/key_components/apps/#api-gateway","title":"API Gateway","text":"FireFly provides APIs that:
Learn more about deploying APIs for custom smart contracts in this tutorial
"},{"location":"overview/key_components/apps/#event-streams","title":"Event Streams","text":"The reality is that the only programming paradigm that works for decentralized solutions is an event-driven one.
For this reason, all blockchain technologies are event-driven programming interfaces at their core.
In an overall solution, those on-chain events must be coordinated with off-chain private data transfers, and existing core-systems / human workflows.
This means great event support is a must:
Learn all about the Hyperledger FireFly Event Bus, and event-driven application architecture, in this reference section
"},{"location":"overview/key_components/apps/#api-generation","title":"API Generation","text":"The blockchain is going to be at the heart of your Web3 project. While usually small in overall surface area compared to the lines of code in the traditional application tiers, this kernel of mission-critical code is what makes your solution transformational compared to a centralized / Web 2.0 solution.
Whether the smart contract is hand crafted for your project, an existing contract on a public blockchain, or a built-in pattern of a framework like FireFly - it must be interacted with correctly.
So there can be no room for misinterpretation in the hand-off between the blockchain Smart Contract specialist, familiar with EVM contracts in Solidity/Vyper, Fabric chaincode (or maybe even raw block transition logic in Rust or Go), and the backend / full-stack application developer / core-system integrator.
Well documented APIs are the modern norm for this, and it is no different for blockchain.
This means Hyperledger FireFly provides:
The ability for every component to be pluggable is at the core of Hyperledger FireFly.
A microservices approach is used, combining code plug-points in the core runtime, with API extensibility to remote runtimes implemented in a variety of programming languages.
"},{"location":"overview/key_components/connectors/#extension-points","title":"Extension points","text":"Learn more about the plugin architecture here
"},{"location":"overview/key_components/connectors/#blockchain-connector-framework","title":"Blockchain Connector Framework","text":"The most advanced extension point is for the blockchain layer, where multiple layers of extensibility are provided to support the programming models, and behaviors of different blockchain technologies.
This framework has been proven with technologies as different as EVM based Layer 2 Ethereum Scaling solutions like Polygon, all the way to permissioned Hyperledger Fabric networks.
Check out instructions to connect to a list of remote blockchain networks here.
Find out more about the Blockchain Connector Framework here.
"},{"location":"overview/key_components/digital_assets/","title":"Digital Assets","text":""},{"location":"overview/key_components/digital_assets/#digital-asset-features","title":"Digital asset features","text":"The modelling, transfer and management of digital assets is the core programming foundation of blockchain.
Yet out of the box, raw blockchains designed to efficiently manage these assets in large ecosystems do not come with all the building blocks needed by applications.
"},{"location":"overview/key_components/digital_assets/#token-api","title":"Token API","text":"Token standards have been evolving in the industry through standards like ERC-20/ERC-721, and the Web3 signing wallets that support these.
Hyperledger FireFly brings this same standardization to the application tier, providing APIs that work across token standards and blockchain implementations, with consistent and interoperable support.
This means one application or set of back-end systems, can integrate with multiple blockchains, and different token implementations.
Pluggability here is key, so that the rules of governance of each digital asset ecosystem can be exposed and enforced, whether tokens are fungible, non-fungible, or some hybrid in between.
Learn more about token standards for fungible tokens, and non-fungible tokens (NFTs) in this set of tutorials
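To make the cross-standard Token API concrete, the sketch below builds the JSON body for a token transfer request. The field names (`pool`, `to`, `amount`, `tokenIndex`) follow the FireFly tokens API, but treat the exact endpoint path and defaults as assumptions to verify against your FireFly version:

```python
import json

# Hypothetical helper: builds the request body for a FireFly token transfer
# (POST /api/v1/namespaces/{ns}/tokens/transfers). Same shape works for
# fungible and non-fungible pools.
def token_transfer_body(pool: str, to_key: str, amount: str, token_index: str = None) -> str:
    body = {
        "pool": pool,      # name of the token pool (fungible or non-fungible)
        "to": to_key,      # target identity or signing key
        "amount": amount,  # decimal string; "1" for a single NFT
    }
    if token_index is not None:
        body["tokenIndex"] = token_index  # selects a specific NFT within the pool
    return json.dumps(body)
```

The same application code can then target an ERC-20 pool or an ERC-721 pool just by changing the pool name, which is the interoperability point this section describes.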
"},{"location":"overview/key_components/digital_assets/#transfer-history-audit-trail","title":"Transfer history / audit trail","text":"For efficiency blockchains do not provide a direct ability to query historical transaction information.
Depending on the blockchain technology, even the current balance of your wallet can be complex to calculate - particularly for blockchain technologies based on an Unspent Transaction Output (UTXO) model.
So off-chain indexing of transaction history is an absolute must-have for any digital asset solution.
Hyperledger FireFly provides:
Wallet and signing-key management is a critical requirement for any blockchain solution, particularly those involving the transfer of digital assets between wallets.
Hyperledger FireFly provides you the ability to:
The reality of most Web3 scenarios is that only a small part of the overall use-case can be represented inside the blockchain or distributed ledger technology.
Some additional data flow is always required. This does not diminish the value of executing the kernel of the logic within the blockchain itself.
Hyperledger FireFly embraces this reality, and allows an organization to keep track of the relationship between the off-chain data flow, and the on-chain transactions.
Let's look at a few common examples:
"},{"location":"overview/key_components/flows/#digital-asset-transfers","title":"Digital Asset Transfers","text":"Examples of common data flows performed off-chain, include Know Your Customer (KYC) and Anti Money Laundering (AML) checks that need to be performed and validated before participating in transactions.
There might also be document management and business transaction flows required to verify the conditions are correct to digitally settle a transaction. Have the goods been delivered? Are the contracts in place?
In regulated enterprise scenarios it is common to see a 10-to-1 difference in the number of steps performed off-chain to complete a business transaction, vs. the number of steps performed on-chain.
These off-chain data flows might be coordinated with on-chain smart contracts that lock assets in digital escrow until the off-chain steps are completed by each party, and protect each party while the steps are being completed.
A common form of digital escrow is a Hashed Timelock Contract (HTLC).
"},{"location":"overview/key_components/flows/#non-fungible-tokens-nfts-and-hash-pinning","title":"Non-fungible Tokens (NFTs) and hash-pinning","text":"The data associated with an NFT might be as simple as a JSON document pointing at an interesting piece of artwork, or as complex a set of high resolution scans / authenticity documents representing a digital twin of a real world object.
Here the concept of a hash pinning is used - allowing anyone who has a copy of the original data to recreate the hash that is stored in the on-chain record.
With even the simplest NFT the business data is not stored on-chain, so simple data flow is always required to publish/download the off-chain data.
The data might be published publicly for anyone to download, or it might be sensitive and require a detailed permissioning flow to obtain it from a current holder of that data.
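The hash-pinning check described above can be sketched in a few lines. This is an illustrative example, not FireFly's actual hashing scheme - FireFly defines its own canonical hashing rules for messages and data:

```python
import hashlib
import json

def pin_hash(payload: dict) -> str:
    # Canonical serialization (sorted keys, no whitespace) so every party
    # derives identical bytes, and therefore an identical hash
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(canonical).hexdigest()

def verify_pin(payload: dict, onchain_hash: str) -> bool:
    # Anyone holding a copy of the original data can recreate the hash
    # stored in the on-chain record and compare
    return pin_hash(payload) == onchain_hash
```

The canonical serialization step is what makes the scheme work in practice: two parties holding semantically identical JSON must serialize it to the same bytes before hashing, or the pins will never match.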
"},{"location":"overview/key_components/flows/#dynamic-nfts-and-business-transaction-flow","title":"Dynamic NFTs and Business Transaction Flow","text":"In an enterprise context, an NFT might have a dynamic ever-evolving trail of business transaction data associated with it. Different parties might have different views of that business data, based on their participation in the business transactions associated with it.
Here the NFT becomes like a foreign key integrated across the core systems of a set of enterprises working together in a set of business transactions.
The data itself needs to be downloaded, retained, processed and rendered - probably integrated into systems, acted upon, and used in multiple exchanges between companies on different blockchains, or off-chain.
The business process is accelerated through this Enterprise NFT on the blockchain - as all parties have matched or bound their own private data store to that NFT. This means they can be confident they are executing a business transaction against the same person or thing in the world.
"},{"location":"overview/key_components/flows/#data-and-transaction-flow-patterns","title":"Data and Transaction Flow patterns","text":"Hyperledger FireFly provides the raw tools for building data and transaction flow patterns, such as storing, hashing and transferring data. It provides the event bus to trigger off-chain applications and integration to participate in the flows.
It also provides the higher level flow capabilities that are needed for multiple parties to build sophisticated transaction flows together, massively simplifying the application logic required:
Learn more in Multiparty Process Flows
"},{"location":"overview/key_components/orchestration_engine/","title":"Orchestration Engine","text":""},{"location":"overview/key_components/orchestration_engine/#firefly-core","title":"FireFly Core","text":"At the core of Hyperledger FireFly is an event-driven engine that routes, indexed, aggregates, and sequences data to and from the blockchain, and other connectors.
"},{"location":"overview/key_components/orchestration_engine/#data-layer","title":"Data Layer","text":"Your own private view of the each network you connect:
Whether a few dozen companies in a private blockchain consortium, or millions of users connected to a public blockchain network - one thing is always true:
Decentralized applications are event-driven.
In an enterprise context, you need to think not only about how those events are being handled and made consistent within the blockchain layer, but also how those events are being processed and integrated to your core systems.
FireFly provides you with the reliable streams of events you need, as well as the interfaces to subscribe to those events and integrate them into your core systems.
Learn more about the event bus and event-driven programming in this reference document
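As an illustration of subscribing to those reliable event streams, applications typically connect to FireFly's WebSocket endpoint (`/ws`) and send a start command to attach to a named subscription. The field names below follow the documented WebSocket protocol, but treat this as an illustrative sketch to check against your FireFly version:

```python
import json

def start_subscription_command(namespace: str, subscription: str, autoack: bool = False) -> str:
    # The "start" command attaches this WebSocket connection to a named,
    # durable subscription. With autoack=False the application must
    # explicitly ack each event, giving reliable at-least-once delivery.
    return json.dumps({
        "type": "start",
        "namespace": namespace,
        "name": subscription,
        "autoack": autoack,
    })
```

Because the subscription is durable on the server side, events that arrive while the application is disconnected are delivered when it reconnects and sends the start command again.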
"},{"location":"overview/key_components/security/","title":"Security","text":""},{"location":"overview/key_components/security/#api-security","title":"API Security","text":"Hyperledger FireFly provides a pluggable infrastructure for authenticating API requests.
Each namespace can be configured with a different authentication plugin, such that different teams can have different access to resources on the same FireFly server.
A reference plugin implementation is provided for HTTP Basic Auth, combined with a htpasswd verification of passwords with a bcrypt encoding.
See this config section for details, and the reference implementation in Github
Pre-packaged vendor extensions to Hyperledger FireFly are known to be available, addressing more comprehensive role-based access control (RBAC) and JWT/OAuth based security models.
"},{"location":"overview/key_components/security/#data-partitioning-and-tenancy","title":"Data Partitioning and Tenancy","text":"Namespaces also provide a data isolation system for different applications / teams / tenants sharing a Hyperledger FireFly node.
Data is partitioned within the FireFly database by namespace. It is also possible to increase the separation between namespaces by using separate database configurations: for example, different databases or table spaces within a single database server, or even different database servers.
"},{"location":"overview/key_components/security/#private-data-exchange","title":"Private Data Exchange","text":"FireFly has a pluggable implementation of a private data transfer bus. This transport supports both structured data (conforming to agreed data formats), and large unstructured data & documents.
A reference microservice implementation is provided for HTTPS point-to-point connectivity with mutual TLS encryption.
See the reference implementation in Github
Pre-packaged vendor extensions to Hyperledger FireFly are known to be available, addressing message queue based reliable delivery of messages, hub-and-spoke connectivity models, chunking of very large file payloads, and end-to-end encryption.
Learn more about these private data flows in Multiparty Process Flows.
"},{"location":"overview/key_components/tools/","title":"Tools","text":""},{"location":"overview/key_components/tools/#firefly-cli","title":"FireFly CLI","text":"The FireFly CLI can be used to create local FireFly stacks for offline development of blockchain apps. This allows developers to rapidly iterate on their idea without needing to set up a bunch of infrastructure before they can write the first line of code.
"},{"location":"overview/key_components/tools/#firefly-sandbox","title":"FireFly Sandbox","text":"The FireFly Sandbox sits logically outside the Supernode, and it acts like an \"end-user\" application written to use FireFly's API. In your setup, you have one Sandbox per member, each talking to their own FireFly API. The purpose of the Sandbox is to provide a quick and easy way to try out all of the fundamental building blocks that FireFly provides. It also shows developers, through example code snippets, how they would implement the same functionality in their own app's backend.
\ud83d\uddd2 Technical details: The FireFly Sandbox is an example \"full-stack\" web app. It has a backend written in TypeScript / Node.js, and a frontend in TypeScript / React. When you click a button in your browser, the frontend makes a request to the backend, which then uses the FireFly Node.js SDK to make requests to FireFly's API.
"},{"location":"overview/key_components/tools/#firefly-explorer","title":"FireFly Explorer","text":"The FireFly explorer is a part of FireFly Core itself. It is a view into the system that allows operators to monitor the current state of the system and investigate specific transactions, messages, and events. It is also a great way for developers to see the results of running their code against FireFly's API.
"},{"location":"overview/multiparty/","title":"Enterprise multi-party systems","text":""},{"location":"overview/multiparty/#introduction","title":"Introduction","text":"Multiparty mode has all the features in Gateway mode with the added benefit of multi-party process flows.
A multi-party system is a class of application empowered by the technology revolution of blockchain digital ledger technology (DLT), and emerging cryptographic proof technologies like zero-knowledge proofs (ZKPs) and trusted execution environments (TEEs).
By combining these technologies with existing best practice technologies for data security in regulated industries, multi-party systems allow businesses to collaborate in ways previously impossible.
Through agreement on a common source of truth, such as the completion of a step in a business process to proceed, or the existence and ownership of a unique asset, businesses can cut out huge inefficiencies in existing multi-party processes.
New business and transaction models can be achieved, unlocking value in assets and data that were previously siloed within a single organization. Governance and incentive models can be created to enable secure collaboration in new ways, without compromising the integrity of an individual organization.
The technology is most powerful in ecosystems of \"coopetition\", where privacy and security requirements are high. Multi-party systems establish new models of trust, with easy to prove outcomes that minimize the need for third party arbitration, and costly investigation into disputes.
"},{"location":"overview/multiparty/#points-of-difference","title":"Points of difference","text":"Integration with existing systems of record is critical to unlock the potential of these new ecosystems. So multi-party systems embrace the existing investments of each party, rather than seeking to unify or replace them.
Multi-party systems are different from centralized third-party systems, because each party retains sovereignty over:
There are many multiparty use cases. An example for healthcare is detailed below.
Patient care requires multiple entities to work together, including healthcare providers, insurance companies, and medical systems. Sharing data between these parties is inefficient and prone to errors, and patient information must be kept secure and up to date. Blockchain's shared ledger makes it possible to automate data sharing while ensuring accuracy and privacy.
In a multi-party FireFly system, entities are able to share data privately as detailed in the \"Data Exchange\" section. For example, imagine a scenario where there is one healthcare provider and two insurance companies operating in a multi-party system. Insurance company A may send private data to the healthcare provider that insurance company B is not privy to. While insurance company B may not know the contents of the data transferred, it may verify that a transfer of data did occur. This validation is all that's needed to maintain an up-to-date state of the blockchain.
In a larger healthcare ecosystem with many members, a similar concept may emerge with multiple variations of members.
"},{"location":"overview/multiparty/broadcast/","title":"Broadcast / shared data","text":""},{"location":"overview/multiparty/broadcast/#introduction","title":"Introduction","text":"Multi-party systems are about establishing a shared source of truth, and often that needs to include certain reference data that is available to all parties in the network. The data needs to be \"broadcast\" to all members, and also need to be available to new members that join the network
"},{"location":"overview/multiparty/broadcast/#blockchain-backed-broadcast","title":"Blockchain backed broadcast","text":"In order to maintain a complete history of all broadcast data for new members joining the network, FireFly uses the blockchain to sequence the broadcasts with pinning transactions referring to the data itself.
Using the blockchain also gives a global order of events for these broadcasts, which allows them to be processed by each member in a way that allows them to derive the same result - even though the processing logic on the events themselves is being performed independently by each member.
For more information see Multiparty Event Sequencing.
"},{"location":"overview/multiparty/broadcast/#shared-data","title":"Shared data","text":"The data included in broadcasts is not recorded on the blockchain. Instead a pluggable shared storage mechanism is used to contain the data itself. The on-chain transaction just contains a hash of the data that is stored off-chain.
This is because the data itself might be too large to be efficiently stored and transferred via the blockchain itself, or subject to deletion at some point in the future through agreement by the members in the network.
While the data should be reliably stored with visibility to all members of the network, the data can still be secured from leakage outside of the network.
The InterPlanetary File System (IPFS) is an example of a distributed technology for peer-to-peer storage and distribution of such data in a decentralized multi-party system. It provides secure connectivity between a number of nodes, combined with a decentralized index of data that is available, and native use of hashes within the technology as the way to reference data by content.
"},{"location":"overview/multiparty/broadcast/#firefly-built-in-broadcasts","title":"FireFly built-in broadcasts","text":"FireFly uses the broadcast mechanism internally to distribute key information to all parties in the network:
These definitions rely on the same assurances provided by blockchain backed broadcast that FireFly applications do.
Private data exchange is the way most enterprise business-to-business communication happens today. One party privately sends data to another, over a pipe that has been agreed as sufficiently secure between the two parties. That might be a REST API, SOAP Web Service, FTP / EDI, Message Queue (MQ), or other B2B Gateway technology.
The ability to perform these same private data exchanges within a multi-party system is critical. In fact, it's common for the majority of business data to continue to flow over such interfaces.
So real-time application to application private messaging, and private transfer of large blobs/documents, are first class constructs in the FireFly API.
"},{"location":"overview/multiparty/data_exchange/#qualities-of-service","title":"Qualities of service","text":"FireFly recognizes that a multi-party system will need to establish a secure messaging backbone, with the right qualities of service for their requirements. So the implementation is pluggable, and the plugin interface embraces the following quality of service characteristics that differ between different implementations.
A reference implementation of a private data exchange is provided as part of the FireFly project. This implementation uses peer-to-peer transfer over a synchronous HTTPS transport, backed by Mutual TLS authentication. X509 certificate exchange is orchestrated by FireFly, such that self-signed certificates can be used (or multiple PKI trust roots) and bound to the blockchain-backed identities of the organizations in FireFly.
See hyperledger/firefly-dataexchange-https
"},{"location":"overview/multiparty/deterministic/","title":"Deterministic Compute","text":""},{"location":"overview/multiparty/deterministic/#introduction","title":"Introduction","text":"A critical aspect of designing a multi-party systems, is choosing where you exploit the blockchain and other advanced cryptography technology to automate agreement between parties.
Specifically where you rely on the computation itself to come up with a result that all parties can independently trust. For example because all parties performed the same computation independently and came up with the same result, against the same data, and agreed to that result using a consensus algorithm.
The more sophisticated the agreement you want to prove, the more consideration needs to be given to factors such as:
FireFly embraces the fact that different use cases will make different decisions on how much of the agreement should be enforced through deterministic compute.
It also recognizes that multi-party systems include a mixture of approaches in addition to deterministic compute, including traditional off-chain secure HTTP/messaging, documents, private non-deterministic logic, and human workflows.
"},{"location":"overview/multiparty/deterministic/#the-fundamental-building-blocks","title":"The fundamental building blocks","text":"There are some fundamental types of deterministic computation, that can be proved with mature blockchain technology, and all multi-party systems should consider exploiting:
There are use cases where a deterministic agreement on computation is desired, but the data upon which the execution is performed cannot be shared between all the parties.
For example, proving total conservation of value in a token trading scenario without knowing who is involved in the individual transactions; or proving you have access to a piece of data without disclosing what that data is.
Technologies exist that can solve these requirements, with two major categories:
FireFly today provides an orchestration engine that's helpful in coordinating the inputs, outputs, and execution of such advanced cryptography technologies.
Active collaboration between the FireFly and other projects like Hyperledger Avalon, and Hyperledger Cactus, is evolving how these technologies can plug-in with higher level patterns.
"},{"location":"overview/multiparty/deterministic/#complementary-approaches-to-deterministic-computation","title":"Complementary approaches to deterministic computation","text":"Enterprise multi-party systems usually operate differently to end-user decentralized applications. In particular, strong identity is established for the organizations that are involved, and those organizations usually sign legally binding commitments around their participation in the network. Those businesses then bring on-board an ecosystem of employees and or customers that are end-users to the system.
So the shared source of truth empowered by the blockchain and other cryptography are not the only tools that can be used in the toolbox to ensure correct behavior. Recognizing that there are real legal entities involved, that are mature and regulated, does not undermine the value of the blockchain components. In fact it enhances it.
A multi-party system can use just enough of this secret sauce in the right places, to change the dynamics of trust such that competitors in a market are willing to create value together that could never be created before.
Or create a system where parties can share data with each other while still conforming to their own regulatory and audit commitments, that previously would have been impossible to share.
Not to be overlooked is the sometimes astonishing efficiency increase that can be added to existing business relationships, simply by being able to agree the order and sequence of a set of events. Having the tools to turn processes that previously involved physical documents flying round the world into near-immediate digital agreement means a dispute can be arbitrated at a tiny fraction of the cost that would be possible without a shared and immutable audit trail of who said what, when.
"},{"location":"overview/multiparty/multiparty_flow/","title":"Multiparty Process Flows","text":""},{"location":"overview/multiparty/multiparty_flow/#flow-features","title":"Flow features","text":"Data, value, and process flow are how decentralized systems function. In an enterprise context not all of this data can be shared with all parties, and some is very sensitive.
"},{"location":"overview/multiparty/multiparty_flow/#private-data-flow","title":"Private data flow","text":"Managing the flows of data so that the right information is shared with the right parties, at the right time, means thinking carefully about what data flows over what channel.
The number of enterprise solutions where all data can flow directly through the blockchain, is vanishingly small.
Coordinating these different data flows is often one of the biggest pieces of heavy lifting solved on behalf of the application by a robust framework like FireFly:
Web3 has the potential to transform how ecosystems interact. Digitally transforming legacy process flows, by giving deterministic outcomes that are trusted by all parties, backed by new forms of digital trust between parties.
Some of the most interesting use cases require complex multi-step business processes across participants. The Web3 version of business process management comes with some new challenges.
So you need the platform to:
Business processes need data, and that data comes in many shapes and sizes.
The platform needs to handle all of them:
The ability to globally sequence events across parties is a game-changing capability of a multiparty system. FireFly is designed to allow developers to harness that power in the application layer, to build sophisticated multi-party APIs and user experiences.
Building a successful multi-party system is often about business experimentation, and business results. Proving the efficiency gains, and new business models, made possible by working together in a new way under a new system of trust.
Things that can get in the way of that innovation can include concerns over data privacy, technology maturity, and constraints on the autonomy of an individual party in the system. An easy-to-explain position on how new technology components are used, where data lives, and how business process independence is maintained can really help parties make the leap of faith necessary to take the step towards a new model.
Keys to success often include building great user experiences that help digitize clunky, decades-old manual processes, as well as easy-to-integrate APIs that embrace the existing core systems of record established within each party.
"},{"location":"overview/multiparty/multiparty_flow/#consider-the-on-chain-toolbox-too","title":"Consider the on-chain toolbox too","text":"There is a huge amount of value that deterministic execution of multi-party logic within the blockchain can add. However, the more compute is made fully deterministic via a blockchain consensus algorithm validated by multiple parties beyond those with a business need for access to the data, the more sensitivity needs to be taken to data privacy. Also bear in mind any data that is used in this processing becomes immutable - it can never be deleted.
The core constructs of blockchain are a great place to start. Almost every process can be enhanced with pre-built fungible and non-fungible tokens, for example. Maybe it's to build a token economy that enhances the value parties get from the system, or to encourage healthy participation (and discourage bad behavior). Or maybe it's to track exactly which party owns a document, asset, or action within a process using NFTs.
On top of this you can add advanced tools like digital escrow, signature / threshold based voting on outcomes, and atomic swaps of value/ownership.
The investment in building this bespoke on-chain logic is higher than building the off-chain pieces (and there are always some off-chain pieces as we've discussed), so it's about finding the kernel of value the blockchain can provide to differentiate your solution from a centralized database solution.
The power provided by deterministic sequencing of events, attested by signatures, and pinned to private data might be sufficient for some cases. In others the token constructs are the key value that differentiates the decentralized ecosystem. Whatever it is, it's important it is identified and crafted carefully.
Note that advanced privacy preserving techniques such as zero-knowledge proofs (ZKP) are gaining traction and hardening in their production readiness and efficiency. Expect these to play an increasing role in the technology stack of multiparty systems (and Hyperledger FireFly) in the future.
Learn more in the Deterministic Compute section.
"},{"location":"reference/api_post_syntax/","title":"API POST Syntax","text":""},{"location":"reference/api_post_syntax/#syntax-overview","title":"Syntax Overview","text":"Endpoints that allow submitting a transaction allow an optional query parameter called confirm. When confirm=true is set in the query string, FireFly will wait to send an HTTP response until the message has been confirmed. This means, where a blockchain transaction is involved, the HTTP request will not return until the blockchain transaction is complete.
This is useful for endpoints such as registration, where the client app cannot proceed until the transaction is complete and the member/node is registered. Rather than making a request to register a member/node and then repeatedly polling the API to check to see if it succeeded, an HTTP client can use this query parameter and block until registration is complete.
NOTE: This does not mean that any other member of the network has received, processed, or responded to the message. It just means that the transaction is complete from the perspective of the FireFly node to which the transaction was submitted.
"},{"location":"reference/api_post_syntax/#example-api-call","title":"Example API Call","text":"POST /api/v1/messages/broadcast?confirm=true
This will broadcast a message and wait for the message to be confirmed before returning.
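Using only Python's standard library, a client call might look like the sketch below. The broadcast path and `confirm` parameter come from the example above; the host/port and message body are illustrative assumptions:

```python
import json
import urllib.request

def build_broadcast_request(base_url: str, message: dict, confirm: bool = True) -> urllib.request.Request:
    # With confirm=true, FireFly holds the HTTP response open until the
    # message (and any associated blockchain transaction) is confirmed
    # on this node - no polling loop needed in the client.
    url = f"{base_url}/api/v1/messages/broadcast"
    if confirm:
        url += "?confirm=true"
    return urllib.request.Request(
        url,
        data=json.dumps(message).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example (assumes a FireFly node on localhost:5000):
# req = build_broadcast_request("http://localhost:5000", {"data": [{"value": "hello"}]})
# resp = urllib.request.urlopen(req)  # blocks until the message is confirmed
```

When using `confirm=true`, remember to set a generous client-side timeout, since the response is deliberately delayed until the transaction completes.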
"},{"location":"reference/api_query_syntax/","title":"API Query Syntax","text":""},{"location":"reference/api_query_syntax/#syntax-overview","title":"Syntax Overview","text":"REST collections provide filter, skip, limit and sort support.
field=[modifiers][operator]match-string
GET /api/v1/messages?confirmed=>0&type=broadcast&topic=t1&topic=t2&context=@someprefix&sort=sequence&descending&skip=100&limit=50
This states:
- confirmed greater than 0
- type exactly equal to broadcast
- topic exactly equal to t1 or t2
- context containing the case-sensitive string someprefix
- sort by sequence in descending order
- limit of 50 and skip of 100 (e.g. get page 3, with 50/page)
Table of filter operations, which must be the first character of the query string (after the = in the above URL path example):
Operators are a type of comparison operation to perform against the match string.
Operator | Description
= | Equal
(none) | Equal (shortcut)
@ | Containing
^ | Starts with
$ | Ends with
<< | Less than
< | Less than (shortcut)
<= | Less than or equal
>> | Greater than
> | Greater than (shortcut)
>= | Greater than or equal
Shortcuts are only safe to use when your match string starts with a-z, A-Z, 0-9, - or _.
Modifiers can appear before the operator, to change its behavior.
Modifier | Description
! | Not - negates the match
: | Case insensitive
? | Treat empty match string as null
[ | Combine using AND on the same field
] | Combine using OR on the same field (default)
"},{"location":"reference/api_query_syntax/#detailed-examples","title":"Detailed examples","text":"
Example | Description
cat | Equals \"cat\"
=cat | Equals \"cat\" (same)
!=cat | Not equal to \"cat\"
:=cat | Equal to \"CAT\", \"cat\", \"CaT\" etc.
!:cat | Not equal to \"CAT\", \"cat\", \"CaT\" etc.
=!cat | Equal to \"!cat\" (! is after operator)
^cats/ | Starts with \"cats/\"
$_cat | Ends with \"_cat\"
!:^cats/ | Does not start with \"cats/\", \"CATs/\" etc.
!$-cat | Does not end with \"-cat\"
?= | Is null
!?= | Is not null
"},{"location":"reference/api_query_syntax/#time-range-example","title":"Time range example","text":"For this case we need to combine multiple queries on the same created field using AND semantics (with the [ modifier):
?created=[>>2021-01-01T00:00:00Z&created=[<=2021-01-02T00:00:00Z\n So this means:
created greater than 2021-01-01T00:00:00Z AND created less than or equal to 2021-01-02T00:00:00Z. The receipt for a FireFly blockchain operation contains an extraInfo section that records additional information about the transaction. For example:
\"receipt\": {\n ...\n \"extraInfo\": [\n {\n \"contractAddress\":\"0x87ae94ab290932c4e6269648bb47c86978af4436\",\n \"cumulativeGasUsed\":\"33812\",\n \"from\":\"0x2b1c769ef5ad304a4889f2a07a6617cd935849ae\",\n \"to\":\"0x302259069aaa5b10dc6f29a9a3f72a8e52837cc3\",\n \"gasUsed\":\"33812\",\n \"status\":\"0\",\n \"errorMessage\":\"Not enough tokens\"\n }\n ],\n ...\n},\n The errorMessage field can be set by a blockchain connector to provide FireFly and the end-user with more information about the reason why a transaction failed. The blockchain connector can choose what information to include in the errorMessage field. It may be set to an error message relating to the blockchain connector itself or an error message passed back from the blockchain or smart contract that was invoked.
If FireFly is configured to connect to a Besu EVM client, and Besu has been configured with the revert-reason-enabled=true setting (note - the default value for Besu is false), error messages passed to FireFly from the blockchain client itself will be set correctly in the FireFly blockchain operation. For example:
\"errorMessage\":\"Not enough tokens\" for a revert error string from a smart contract. If the smart contract uses a custom error type, Besu will return the revert reason to FireFly as a hexadecimal string, but FireFly will be unable to decode it. In this case the blockchain operation error message and return values will be set to:
\"errorMessage\":\"FF23053: Error return value for custom error: <revert hex string>\" and \"returnValue\":\"<revert hex string>\". A future update to FireFly could be made to automatically decode custom error revert reasons if FireFly knows the ABI for the custom error. See FireFly issue 1466 which describes the current limitation.
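As context for the limitation above, the raw revert hex string is just a 4-byte custom-error selector followed by 32-byte ABI-encoded argument words. A minimal sketch (illustrative only, not FireFly code — fully decoding the arguments still requires the contract's ABI) of splitting such a payload:

```python
# Illustrative helper (hypothetical, not part of FireFly): split a custom-error
# revert payload into its 4-byte selector and 32-byte argument words.

def split_custom_error(revert_hex: str):
    """Return (selector hex, list of argument words decoded as unsigned ints)."""
    data = bytes.fromhex(revert_hex.removeprefix("0x"))
    selector, body = data[:4], data[4:]
    words = [int.from_bytes(body[i:i + 32], "big") for i in range(0, len(body), 32)]
    return "0x" + selector.hex(), words

# The revert hex string from the example receipt in this document
raw = ("0x1320fa6a"
       "0000000000000000000000000000000000000000000000000000000000000064"
       "0000000000000000000000000000000000000000000000000000000000000010")
selector, args = split_custom_error(raw)
print(selector, args)
```

Without the ABI, the selector cannot be mapped back to an error name, and the integer interpretation of each word is only one possible reading — which is exactly why FireFly currently reports the raw hex string.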
If FireFly is configured to connect to Besu without revert-reason-enabled=true the error message will be set to:
\"errorMessage\":\"FF23054: Error return value unavailable\" The precise format of the error message in a blockchain operation can vary based on different factors. The sections below describe in detail how the error message is populated, with specific references to the firefly-evmconnect blockchain connector.
firefly-evmconnect Error Message","text":"The following section describes the way that the firefly-evmconnect plugin uses the errorMessage field. This serves both as an explanation of how EVM-based transaction errors will be formatted, and as a guide that other blockchain connectors may decide to follow.
The errorMessage field for a firefly-evmconnect transaction may contain one of the following:
\"FF23054: Error return value unavailable\" when no revert reason could be retrieved; a decoded error string such as Not enough tokens, from a statement like require(requestedTokens <= allowance, \"Not enough tokens\"); or FF23053: Error return value for custom error: 0x1320fa6a00000000000000000000000000000000000000000000000000000000000000640000000000000000000000000000000000000000000000000000000000000010\n when the contract reverted with a custom error such as: error AllowanceTooSmall(uint256 requested, uint256 allowance);\n...\nrevert AllowanceTooSmall({ requested: 100, allowance: 20 });\n In the custom error case the returnValue of the extraInfo will be set to the raw byte string. For example: \"receipt\": {\n ...\n \"extraInfo\": [\n {\n \"contractAddress\":\"0x87ae94ab290932c4e6269648bb47c86978af4436\",\n \"cumulativeGasUsed\":\"33812\",\n \"from\":\"0x2b1c769ef5ad304a4889f2a07a6617cd935849ae\",\n \"to\":\"0x302259069aaa5b10dc6f29a9a3f72a8e52837cc3\",\n \"gasUsed\":\"33812\",\n \"status\":\"0\",\n \"errorMessage\":\"FF23053: Error return value for custom error: 0x1320fa6a00000000000000000000000000000000000000000000000000000000000000640000000000000000000000000000000000000000000000000000000000000010\", \n \"returnValue\":\"0x1320fa6a00000000000000000000000000000000000000000000000000000000000000640000000000000000000000000000000000000000000000000000000000000010\"\n }\n ],\n ...\n},\nThe ability of a blockchain connector such as firefly-evmconnect to retrieve the reason for a transaction failure is dependent on the configuration of the blockchain it is connected to. For an EVM blockchain the reason why a transaction failed is recorded with the REVERT op code, with a REASON set to the reason for the failure. By default, most EVM clients do not store this reason in the transaction receipt. This is typically to reduce resource consumption such as memory usage in the client. It is usually possible to configure an EVM client to store the revert reason in the transaction receipt. For example Hyperledger Besu\u2122 provides the --revert-reason-enabled configuration option.
If the transaction receipt does not contain the revert reason it is possible to request that an EVM client re-run the transaction and return a trace of all of the op-codes, including the final REVERT REASON. This can be a resource intensive request to submit to an EVM client, and is only available on archive nodes or for very recent blocks.
The firefly-evmconnect blockchain connector attempts to obtain the reason for a transaction revert and include it in the extraInfo field. It uses the following mechanisms, in this order:
First checks whether the transaction receipt already contains the revert reason. If it does not, and the connector.traceTXForRevertReason configuration option is set to true, calls debug_traceTransaction to obtain a full trace of the transaction and extract the revert reason. By default, connector.traceTXForRevertReason is set to false to avoid submitting high-resource requests to the EVM client. If the revert reason can be obtained using either mechanism above, the revert reason bytes are decoded in the following way: - Attempts to decode the bytes as the standard Error(string) signature format and includes the decoded string in the errorMessage - If the reason is not a standard Error(string) error, sets the errorMessage to FF23053: Error return value for custom error: <raw hex string> and includes the raw byte string in the returnValue field.
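The decode step described above can be sketched in a few lines. This is an illustrative approximation, not the firefly-evmconnect implementation: it assumes the standard Error(string) ABI encoding (selector 0x08c379a0, which is keccak256("Error(string)")[:4], followed by an offset word, a length word, and the string bytes), and falls back to the FF23053 form for anything else.

```python
# Hedged sketch of the revert-reason decode logic described above.
ERROR_STRING_SELECTOR = bytes.fromhex("08c379a0")  # keccak256("Error(string)")[:4]

def decode_revert(data: bytes) -> dict:
    if data[:4] == ERROR_STRING_SELECTOR:
        # ABI layout after the selector: 32-byte offset (assumed to be 32 here),
        # then a 32-byte length word, then the UTF-8 string bytes.
        length = int.from_bytes(data[36:68], "big")
        return {"errorMessage": data[68:68 + length].decode("utf-8")}
    # Not a standard Error(string): report the raw bytes, FF23053-style.
    raw = "0x" + data.hex()
    return {"errorMessage": f"FF23053: Error return value for custom error: {raw}",
            "returnValue": raw}

# Build an Error(string) payload for "Not enough tokens" and decode it back.
msg = b"Not enough tokens"
payload = (ERROR_STRING_SELECTOR
           + (32).to_bytes(32, "big")        # offset to the string data
           + len(msg).to_bytes(32, "big")    # string length
           + msg.ljust(32, b"\x00"))         # string bytes, zero-padded
print(decode_revert(payload))
```

A real connector also has to handle malformed payloads and non-UTF-8 data; this sketch omits that error handling for brevity.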
Every FireFly Transaction can involve zero or more Operations. Blockchain operations are handled by the blockchain connector configured for the namespace and represent a blockchain transaction being handled by that connector.
"},{"location":"reference/blockchain_operation_status/#blockchain-operation-status_1","title":"Blockchain Operation Status","text":"A blockchain operation can require the connector to go through various stages of processing in order to successfully confirm the transaction on the blockchain. The orchestrator in FireFly receives updates from the connector to indicate when the operation has been completed and determine when the FireFly transaction as a whole has finished. These updates must contain enough information to correlate the operation to the FireFly transaction but it can be useful to see more detailed information about how the transaction was processed.
FireFly 1.2 introduced the concept of sub-status types that allow a blockchain connector to distinguish between the intermediate steps involved in progressing a transaction. It also introduced the concept of an action which a connector might carry out in order to progress between types of sub-status. This can be described as a state machine as shown in the following diagram:
To access detailed information about a blockchain operation, FireFly 1.2 introduced a new query parameter, fetchStatus, to the /transaction/{txid}/operation/{opid} API. When FireFly receives an API request that includes the fetchStatus query parameter it makes a synchronous call directly to the blockchain connector, requesting all of the blockchain transaction detail it has. This payload is then included in the FireFly operation response under a new detail field.
{\n \"id\": \"04a8b0c4-03c2-4935-85a1-87d17cddc20a\",\n \"created\": \"2022-05-16T01:23:15Z\",\n \"namespace\": \"ns1\",\n \"tx\": \"99543134-769b-42a8-8be4-a5f8873f969d\",\n \"type\": \"blockchain_invoke\",\n \"status\": \"Succeeded\",\n \"plugin\": \"ethereum\",\n \"input\": {\n // Input used to initiate the blockchain operation\n },\n \"output\": {\n // Minimal blockchain operation data necessary\n // to resolve the FF transaction\n },\n \"detail\": {\n // Full blockchain operation information, including sub-status\n // transitions that took place for the operation to succeed.\n }\n}\n"},{"location":"reference/blockchain_operation_status/#detail-status-structure","title":"Detail Status Structure","text":"The structure of a blockchain operation follows the structure described in Operations. In FireFly 1.2, 2 new attributes were added to that structure to allow more detailed status information to be recorded:
The history field is designed to record an ordered list of sub-status changes that the transaction has gone through. Within each sub-status change are the actions that have been carried out to try and move the transaction on to a new sub-status. Some transactions might spend a long time looping between different sub-status types, so this field records the N most recent sub-status changes (where the size of N is determined by the blockchain connector and its configuration). The following example shows a transaction starting at Received, moving to Tracking, and finally ending up as Confirmed. In order to move from Received to Tracking several actions were performed: AssignNonce, RetrieveGasPrice, and SubmitTransaction.
{\n ...\n \"lastSubmit\": \"2023-01-27T17:11:41.222375469Z\",\n \"nonce\": \"14\",\n \"history\": [\n {\n \"subStatus\": \"Received\",\n \"time\": \"2023-01-27T17:11:41.122965803Z\",\n \"actions\": [\n {\n \"action\": \"AssignNonce\",\n \"count\": 1,\n \"lastInfo\": {\n \u2003 \"nonce\": \"14\"\n },\n \"lastOccurrence\": \"2023-01-27T17:11:41.122967219Z\",\n \"time\": \"2023-01-27T17:11:41.122967136Z\"\n },\n \u2003 {\n \"action\": \"RetrieveGasPrice\",\n \"count\": 1,\n \"lastInfo\": {\n \"gasPrice\": \"0\"\n },\n \"lastOccurrence\": \"2023-01-27T17:11:41.161213303Z\",\n \"time\": \"2023-01-27T17:11:41.161213094Z\"\n },\n {\n \"action\": \"SubmitTransaction\",\n \"count\": 1,\n \u2003 \"lastInfo\": {\n \"txHash\": \"0x4c37de1cf320a1d5c949082bbec8ad5fe918e6621cec3948d609ec3f7deac243\"\n },\n \u2003 \"lastOccurrence\": \"2023-01-27T17:11:41.222374636Z\",\n \u2003 \"time\": \"2023-01-27T17:11:41.222374553Z\"\n \u2003 }\n \u2003 ],\n \u2003 },\n \u2003 {\n \u2003\u2003 \"subStatus\": \"Tracking\",\n \"time\": \"2023-01-27T17:11:41.222400219Z\",\n \u2003 \"actions\": [\n \u2003\u2003\u2003 {\n \u2003\u2003\u2003\u2003 \"action\": \"ReceiveReceipt\",\n \u2003\u2003\u2003\u2003 \"count\": 2,\n \u2003\u2003\u2003\u2003 \"lastInfo\": {\n \u2003\u2003\u2003\u2003\u2003 \"protocolId\": \"000001265122/000000\"\n \u2003\u2003\u2003\u2003 },\n \u2003\u2003\u2003\u2003 \"lastOccurrence\": \"2023-01-27T17:11:57.93120838Z\",\n \u2003\u2003\u2003\u2003 \"time\": \"2023-01-27T17:11:47.930332625Z\"\n \u2003\u2003\u2003 },\n \u2003\u2003\u2003 {\n \u2003\u2003\u2003\u2003 \"action\": \"Confirm\",\n \u2003\u2003\u2003\u2003 \"count\": 1,\n \u2003\u2003\u2003\u2003 \"lastOccurrence\": \"2023-01-27T17:12:02.660275549Z\",\n \u2003\u2003\u2003\u2003 \"time\": \"2023-01-27T17:12:02.660275382Z\"\n \u2003\u2003\u2003 }\n \u2003\u2003 ],\n \u2003 },\n \u2003 {\n \u2003\u2003 \"subStatus\": \"Confirmed\",\n \u2003\u2003 \"time\": \"2023-01-27T17:12:02.660309382Z\",\n \u2003\u2003 \"actions\": 
[],\n \u2003 }\n ]\n ...\n}\n Because the history field is a FIFO structure describing the N most recent sub-status changes, some early sub-status changes or actions may be lost over time. For example, an action of AssignNonce might only happen once when the transaction is first processed by the connector. The historySummary field ensures that a minimal set of information is kept about every single subStatus type and action that has been recorded.
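The split between a bounded history and a never-forgetting summary can be sketched as follows. This is a hypothetical helper for illustration, not FireFly connector code — field names mirror the JSON shown in this document, but the class and its methods are invented for the example.

```python
# Illustrative sketch: bounded history of the N most recent sub-status entries,
# plus a summary that keeps a count and first/last occurrence for every
# subStatus and action ever seen (mirroring history vs historySummary).
from collections import deque

class OperationStatus:
    def __init__(self, max_history: int = 50):
        self.history = deque(maxlen=max_history)  # FIFO: oldest entries fall off
        self.summary = {}                         # never forgets a key

    def _summarize(self, key, now):
        entry = self.summary.setdefault(key, {"count": 0, "firstOccurrence": now})
        entry["count"] += 1
        entry["lastOccurrence"] = now

    def sub_status(self, name: str, now: str):
        self.history.append({"subStatus": name, "time": now, "actions": []})
        self._summarize(("subStatus", name), now)

    def action(self, name: str, now: str, info=None):
        self.history[-1]["actions"].append({"action": name, "time": now, "lastInfo": info})
        self._summarize(("action", name), now)

op = OperationStatus(max_history=2)
op.sub_status("Received", "t0")
op.action("AssignNonce", "t1", {"nonce": "14"})
op.sub_status("Tracking", "t2")
op.sub_status("Confirmed", "t3")
# "Received" (and its AssignNonce action) has fallen out of the bounded
# history, but the summary still records that each happened once.
```

The deque with maxlen gives the FIFO truncation behavior described above, while the summary dictionary preserves the minimal per-key record even after the detailed entry is gone.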
{\n ...\n \"historySummary\": [\n {\n \"count\": 1,\n \u2003 \"firstOccurrence\": \"2023-01-27T17:11:41.122966136Z\",\n \"lastOccurrence\": \"2023-01-27T17:11:41.122966136Z\",\n \u2003 \"subStatus\": \"Received\"\n },\n {\n \"count\": 1,\n \"firstOccurrence\": \"2023-01-27T17:11:41.122967219Z\",\n \"lastOccurrence\": \"2023-01-27T17:11:41.122967219Z\",\n \"action\": \"AssignNonce\"\n },\n {\n \"count\": 1,\n \"firstOccurrence\": \"2023-01-27T17:11:41.161213303Z\",\n \"lastOccurrence\": \"2023-01-27T17:11:41.161213303Z\",\n \"action\": \"RetrieveGasPrice\"\n },\n {\n \"count\": 1,\n \"firstOccurrence\": \"2023-01-27T17:11:41.222374636Z\",\n \"lastOccurrence\": \"2023-01-27T17:11:41.222374636Z\",\n \"action\": \"SubmitTransaction\"\n },\n {\n \u2003 \"count\": 1,\n \u2003 \"firstOccurrence\": \"2023-01-27T17:11:41.222400678Z\",\n \"lastOccurrence\": \"\",\n \u2003 \"subStatus\": \"Tracking\"\n },\n {\n \"count\": 1,\n \"firstOccurrence\": \"2023-01-27T17:11:57.93120838Z\",\n \"lastOccurrence\": \"2023-01-27T17:11:57.93120838Z\",\n \"action\": \"ReceiveReceipt\"\n },\n {\n \"count\": 1,\n \"firstOccurrence\": \"2023-01-27T17:12:02.660309382Z\",\n \"lastOccurrence\": \"2023-01-27T17:12:02.660309382Z\",\n \"action\": \"Confirm\"\n },\n {\n \u2003 \"count\": 1,\n \u2003 \"firstOccurrence\": \"2023-01-27T17:12:02.660309757Z\",\n \"lastOccurrence\": \"2023-01-27T17:12:02.660309757Z\",\n \u2003 \"subStatus\": \"Confirmed\"\n }\n ]\n}\n"},{"location":"reference/blockchain_operation_status/#public-chain-operations","title":"Public Chain Operations","text":"Blockchain transactions submitted to a public chain, for example to Polygon PoS, might take longer and involve more sub-status transitions before being confirmed. One reason for this could be because of gas price fluctuations of the chain. In this case the history for a public blockchain operation might include a large number of subStatus entries. 
Using the example sub-status values above, a blockchain operation might move from Tracking to Stale, back to Tracking, back to Stale and so on.
Below is an example of the history for a public blockchain operation.
{\n ...\n \"lastSubmit\": \"2023-01-27T17:11:41.222375469Z\",\n \"nonce\": \"14\",\n \"history\": [\n {\n \"subStatus\": \"Received\",\n \"time\": \"2023-01-27T17:11:41.122965803Z\",\n \"actions\": [\n {\n \"action\": \"AssignNonce\",\n \"count\": 1,\n \"lastInfo\": {\n \u2003 \"nonce\": \"1\"\n },\n \"lastOccurrence\": \"2023-01-27T17:11:41.122967219Z\",\n \"time\": \"2023-01-27T17:11:41.122967136Z\"\n },\n \u2003 {\n \"action\": \"RetrieveGasPrice\",\n \"count\": 1,\n \"lastInfo\": {\n \"gasPrice\": \"34422243\"\n },\n \"lastOccurrence\": \"2023-01-27T17:11:41.161213303Z\",\n \"time\": \"2023-01-27T17:11:41.161213094Z\"\n },\n {\n \"action\": \"SubmitTransaction\",\n \"count\": 1,\n \u2003 \"lastInfo\": {\n \"txHash\": \"0x83ba5e1cf320a1d5c949082bbec8ae7fe918e6621cec39478609ec3f7deacbdb\"\n },\n \u2003 \"lastOccurrence\": \"2023-01-27T17:11:41.222374636Z\",\n \u2003 \"time\": \"2023-01-27T17:11:41.222374553Z\"\n \u2003 }\n \u2003 ],\n \u2003 },\n \u2003 {\n \u2003\u2003 \"subStatus\": \"Tracking\",\n \"time\": \"2023-01-27T17:11:41.222400219Z\",\n \u2003 \"actions\": [],\n \u2003 },\n \u2003 {\n \u2003\u2003 \"subStatus\": \"Stale\",\n \"time\": \"2023-01-27T17:13:21.222100434Z\",\n \u2003 \"actions\": [\n \u2003\u2003\u2003 {\n \u2003\u2003\u2003\u2003 \"action\": \"RetrieveGasPrice\",\n \u2003\u2003\u2003\u2003 \"count\": 1,\n \"lastInfo\": {\n \"gasPrice\": \"44436243\"\n },\n \u2003\u2003\u2003\u2003 \"lastOccurrence\": \"2023-01-27T17:13:22.93120838Z\",\n \u2003\u2003\u2003\u2003 \"time\": \"2023-01-27T17:13:22.93120838Z\"\n \u2003\u2003\u2003 },\n {\n \"action\": \"SubmitTransaction\",\n \"count\": 1,\n \u2003 \"lastInfo\": {\n \"txHash\": \"0x7b3a5e1ccbc0a1d5c949082bbec8ae7fe918e6621cec39478609ec7aea6103d5\"\n },\n \u2003 \"lastOccurrence\": \"2023-01-27T17:13:32.656374637Z\",\n \u2003 \"time\": \"2023-01-27T17:13:32.656374637Z\"\n \u2003 }\n \u2003\u2003 ],\n \u2003 },\n \u2003 {\n \u2003\u2003 \"subStatus\": \"Tracking\",\n \"time\": 
\"2023-01-27T17:13:33.434400219Z\",\n \u2003 \"actions\": [],\n \u2003 },\n \u2003 {\n \u2003\u2003 \"subStatus\": \"Stale\",\n \"time\": \"2023-01-27T17:15:21.222100434Z\",\n \u2003 \"actions\": [\n \u2003\u2003\u2003 {\n \u2003\u2003\u2003\u2003 \"action\": \"RetrieveGasPrice\",\n \u2003\u2003\u2003\u2003 \"count\": 1,\n \"lastInfo\": {\n \"gasPrice\": \"52129243\"\n },\n \u2003\u2003\u2003\u2003 \"lastOccurrence\": \"2023-01-27T17:15:22.93120838Z\",\n \u2003\u2003\u2003\u2003 \"time\": \"2023-01-27T17:15:22.93120838Z\"\n \u2003\u2003\u2003 },\n {\n \"action\": \"SubmitTransaction\",\n \"count\": 1,\n \u2003 \"lastInfo\": {\n \"txHash\": \"0x89995e1ccbc0a1d5c949082bbec8ae7fe918e6621cec39478609ec7a8c64abc\"\n },\n \u2003 \"lastOccurrence\": \"2023-01-27T17:15:32.656374637Z\",\n \u2003 \"time\": \"2023-01-27T17:15:32.656374637Z\"\n \u2003 }\n \u2003\u2003 ],\n \u2003 },\n \u2003 {\n \u2003\u2003 \"subStatus\": \"Tracking\",\n \"time\": \"2023-01-27T17:15:33.434400219Z\",\n \u2003 \"actions\": [\n \u2003\u2003\u2003 {\n \u2003\u2003\u2003\u2003 \"action\": \"ReceiveReceipt\",\n \u2003\u2003\u2003\u2003 \"count\": 1,\n \u2003\u2003\u2003\u2003 \"lastInfo\": {\n \u2003\u2003\u2003\u2003\u2003 \"protocolId\": \"000004897621/000000\"\n \u2003\u2003\u2003\u2003 },\n \u2003\u2003\u2003\u2003 \"lastOccurrence\": \"2023-01-27T17:15:33.94120833Z\",\n \u2003\u2003\u2003\u2003 \"time\": \"2023-01-27T17:15:33.94120833Z\"\n \u2003\u2003\u2003 },\n \u2003\u2003\u2003 {\n \u2003\u2003\u2003\u2003 \"action\": \"Confirm\",\n \u2003\u2003\u2003\u2003 \"count\": 1,\n \u2003\u2003\u2003\u2003 \"lastOccurrence\": \"2023-01-27T17:16:02.780275549Z\",\n \u2003\u2003\u2003\u2003 \"time\": \"2023-01-27T17:16:02.780275382Z\"\n \u2003\u2003\u2003 }\n \u2003\u2003 ],\n \u2003 },\n \u2003 {\n \u2003\u2003 \"subStatus\": \"Confirmed\",\n \u2003\u2003 \"time\": \"2023-01-27T17:16:03.990309381Z\",\n \u2003\u2003 \"actions\": [],\n \u2003 }\n ]\n 
...\n}\n"},{"location":"reference/config/","title":"Configuration Reference","text":""},{"location":"reference/config/#admin","title":"admin","text":"Key Description Type Default Value enabled Deprecated - use spi.enabled instead boolean <nil>"},{"location":"reference/config/#api","title":"api","text":"Key Description Type Default Value defaultFilterLimit The maximum number of rows to return if no limit is specified on an API request int 25 dynamicPublicURLHeader Dynamic header that informs the backend the base public URL for the request, in order to build URL links in OpenAPI/SwaggerUI string <nil> maxFilterLimit The largest value of limit that an HTTP client can specify in a request int 1000 passthroughHeaders A list of HTTP request headers to pass through to dependency microservices []string [] requestMaxTimeout The maximum amount of time that an HTTP client can specify in a Request-Timeout header to keep a specific request open time.Duration 10m requestTimeout The maximum amount of time that a request is allowed to remain open time.Duration 120s"},{"location":"reference/config/#assetmanager","title":"asset.manager","text":"Key Description Type Default Value keyNormalization Mechanism to normalize keys before using them. 
Valid options are blockchain_plugin - use blockchain plugin (default) or none - do not attempt normalization (deprecated - use namespaces.predefined[].asset.manager.keyNormalization) string blockchain_plugin"},{"location":"reference/config/#batchmanager","title":"batch.manager","text":"Key Description Type Default Value minimumPollDelay The minimum time the batch manager waits between polls on the DB - to prevent thrashing time.Duration 100ms pollTimeout How long to wait without any notifications of new messages before doing a page query time.Duration 30s readPageSize The size of each page of messages read from the database into memory when assembling batches int 100"},{"location":"reference/config/#batchretry","title":"batch.retry","text":"Key Description Type Default Value factor The retry backoff factor float32 2 initDelay The initial retry delay time.Duration 250ms maxDelay The maximum retry delay time.Duration 30s"},{"location":"reference/config/#blobreceiverretry","title":"blobreceiver.retry","text":"Key Description Type Default Value factor The retry backoff factor float32 2 initialDelay The initial retry delay time.Duration 250ms maxDelay The maximum retry delay time.Duration 1m"},{"location":"reference/config/#blobreceiverworker","title":"blobreceiver.worker","text":"Key Description Type Default Value batchMaxInserts The maximum number of items the blob receiver worker will insert in a batch int 200 batchTimeout The maximum amount of time the blob receiver worker will wait time.Duration 50ms count The number of blob receiver workers int 5"},{"location":"reference/config/#broadcastbatch","title":"broadcast.batch","text":"Key Description Type Default Value agentTimeout How long to keep around a batching agent for a sending identity before disposal string 2m payloadLimit The maximum payload size of a batch for broadcast messages BytesSize 800Kb size The maximum number of messages that can be packed into a batch int 200 timeout The timeout to wait for a batch 
to fill, before sending time.Duration 1s"},{"location":"reference/config/#cache","title":"cache","text":"Key Description Type Default Value enabled Enables caching, defaults to true boolean true"},{"location":"reference/config/#cacheaddressresolver","title":"cache.addressresolver","text":"Key Description Type Default Value limit Max number of cached items for address resolver int 1000 ttl Time to live of cached items for address resolver string 24h"},{"location":"reference/config/#cachebatch","title":"cache.batch","text":"Key Description Type Default Value limit Max number of cached items for batches int 100 ttl Time to live of cache items for batches string 5m"},{"location":"reference/config/#cacheblockchain","title":"cache.blockchain","text":"Key Description Type Default Value limit Max number of cached items for blockchain int 100 ttl Time to live of cached items for blockchain string 5m"},{"location":"reference/config/#cacheblockchainevent","title":"cache.blockchainevent","text":"Key Description Type Default Value limit Max number of cached blockchain events for transactions int 1000 ttl Time to live of cached blockchain events for transactions string 5m"},{"location":"reference/config/#cacheeventlistenertopic","title":"cache.eventlistenertopic","text":"Key Description Type Default Value limit Max number of cached items for blockchain listener topics int 100 ttl Time to live of cached items for blockchain listener topics string 5m"},{"location":"reference/config/#cachegroup","title":"cache.group","text":"Key Description Type Default Value limit Max number of cached items for groups int 50 ttl Time to live of cached items for groups string 1h"},{"location":"reference/config/#cacheidentity","title":"cache.identity","text":"Key Description Type Default Value limit Max number of cached identities for identity manager int 100 ttl Time to live of cached identities for identity manager string 
1h"},{"location":"reference/config/#cachemessage","title":"cache.message","text":"Key Description Type Default Value size Max size of cached messages for data manager BytesSize 50Mb ttl Time to live of cached messages for data manager string 5m"},{"location":"reference/config/#cachemethods","title":"cache.methods","text":"Key Description Type Default Value limit Max number of cached items for schema validations on blockchain methods int 200 ttl Time to live of cached items for schema validations on blockchain methods string 5m"},{"location":"reference/config/#cacheoperations","title":"cache.operations","text":"Key Description Type Default Value limit Max number of cached items for operations int 1000 ttl Time to live of cached items for operations string 5m"},{"location":"reference/config/#cachetokenpool","title":"cache.tokenpool","text":"Key Description Type Default Value limit Max number of cached items for token pools int 100 ttl Time to live of cached items for token pool string 1h"},{"location":"reference/config/#cachetransaction","title":"cache.transaction","text":"Key Description Type Default Value size Max size of cached transactions BytesSize 1Mb ttl Time to live of cached transactions string 5m"},{"location":"reference/config/#cachevalidator","title":"cache.validator","text":"Key Description Type Default Value size Max size of cached validators for data manager BytesSize 1Mb ttl Time to live of cached validators for data manager string 1h"},{"location":"reference/config/#config","title":"config","text":"Key Description Type Default Value autoReload Monitor the configuration file for changes, and automatically add/remove/reload namespaces and plugins boolean <nil>"},{"location":"reference/config/#cors","title":"cors","text":"Key Description Type Default Value credentials CORS setting to control whether a browser allows credentials to be sent to this API boolean true debug Whether debug is enabled for the CORS implementation boolean false enabled Whether 
CORS is enabled boolean true headers CORS setting to control the allowed headers []string [*] maxAge The maximum age a browser should rely on CORS checks time.Duration 600 methods CORS setting to control the allowed methods []string [GET POST PUT PATCH DELETE] origins CORS setting to control the allowed origins []string [*]"},{"location":"reference/config/#debug","title":"debug","text":"Key Description Type Default Value address The HTTP interface the go debugger binds to string localhost port An HTTP port on which to enable the go debugger int -1"},{"location":"reference/config/#downloadretry","title":"download.retry","text":"Key Description Type Default Value factor The retry backoff factor float32 2 initialDelay The initial retry delay time.Duration 100ms maxAttempts The maximum number of attempts int 100 maxDelay The maximum retry delay time.Duration 1m"},{"location":"reference/config/#downloadworker","title":"download.worker","text":"Key Description Type Default Value count The number of download workers int 10 queueLength The length of the work queue in the channel to the workers - defaults to 2x the worker count int <nil>"},{"location":"reference/config/#eventaggregator","title":"event.aggregator","text":"Key Description Type Default Value batchSize The maximum number of records to read from the DB before performing an aggregation run BytesSize 200 batchTimeout How long to wait for new events to arrive before performing aggregation on a page of events time.Duration 0ms firstEvent The first event the aggregator should process, if no previous offset is stored in the DB. 
Valid options are oldest or newest string oldest pollTimeout The time to wait without a notification of new events, before trying a select on the table time.Duration 30s rewindQueryLimit Safety limit on the maximum number of records to search when performing queries to search for rewinds int 1000 rewindQueueLength The size of the queue into the rewind dispatcher int 10 rewindTimeout The minimum time to wait for rewinds to accumulate before resolving them time.Duration 50ms"},{"location":"reference/config/#eventaggregatorretry","title":"event.aggregator.retry","text":"Key Description Type Default Value factor The retry backoff factor float32 2 initDelay The initial retry delay time.Duration 100ms maxDelay The maximum retry delay time.Duration 30s"},{"location":"reference/config/#eventdbevents","title":"event.dbevents","text":"Key Description Type Default Value bufferSize The size of the buffer of change events BytesSize 100"},{"location":"reference/config/#eventdispatcher","title":"event.dispatcher","text":"Key Description Type Default Value batchTimeout A short time to wait for new events to arrive before re-polling for new events time.Duration 0ms bufferLength The number of events + attachments an individual dispatcher should hold in memory ready for delivery to the subscription int 5 pollTimeout The time to wait without a notification of new events, before trying a select on the table time.Duration 30s"},{"location":"reference/config/#eventdispatcherretry","title":"event.dispatcher.retry","text":"Key Description Type Default Value factor The retry backoff factor float32 <nil> initDelay The initial retry delay time.Duration <nil> maxDelay The maximum retry delay time.Duration <nil>"},{"location":"reference/config/#eventtransports","title":"event.transports","text":"Key Description Type Default Value default The default event transport for new subscriptions string websockets enabled Which event interface plugins are enabled boolean [websockets 
webhooks]"},{"location":"reference/config/#eventswebhooks","title":"events.webhooks","text":"Key Description Type Default Value connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted time.Duration 30s expectContinueTimeout See ExpectContinueTimeout in the Go docs time.Duration 1s headers Adds custom headers to HTTP requests map[string]string <nil> idleTimeout The max duration to hold a HTTP keepalive connection between calls time.Duration 475ms maxConnsPerHost The max number of connections, per unique hostname. Zero means no limit int 0 maxIdleConns The max number of idle connections to hold pooled int 100 maxIdleConnsPerHost The max number of idle connections, per unique hostname. Zero means net/http uses the default of only 2. int 100 passthroughHeadersEnabled Enable passing through the set of allowed HTTP request headers boolean false requestTimeout The maximum amount of time that a request is allowed to remain open time.Duration 30s tlsHandshakeTimeout The maximum amount of time to wait for a successful TLS handshake time.Duration 10s"},{"location":"reference/config/#eventswebhooksauth","title":"events.webhooks.auth","text":"Key Description Type Default Value password Password string <nil> username Username string <nil>"},{"location":"reference/config/#eventswebhooksproxy","title":"events.webhooks.proxy","text":"Key Description Type Default Value url Optional HTTP proxy server to connect through string <nil>"},{"location":"reference/config/#eventswebhooksretry","title":"events.webhooks.retry","text":"Key Description Type Default Value count The maximum number of times to retry int 5 enabled Enables retries boolean false errorStatusCodeRegex The regex that the error response status code must match to trigger retry string <nil> initWaitTime The initial retry delay time.Duration 250ms maxWaitTime The maximum retry delay time.Duration 
30s"},{"location":"reference/config/#eventswebhooksthrottle","title":"events.webhooks.throttle","text":"Key Description Type Default Value burst The maximum number of requests that can be made in a short period of time before the throttling kicks in. int <nil> requestsPerSecond The average rate at which requests are allowed to pass through over time. int <nil>"},{"location":"reference/config/#eventswebhookstls","title":"events.webhooks.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify When to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. 
Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#eventswebsockets","title":"events.websockets","text":"Key Description Type Default Value readBufferSize WebSocket read buffer size BytesSize 16Kb writeBufferSize WebSocket write buffer size BytesSize 16Kb"},{"location":"reference/config/#histograms","title":"histograms","text":"Key Description Type Default Value maxChartRows The maximum rows to fetch for each histogram bucket int 100"},{"location":"reference/config/#http","title":"http","text":"Key Description Type Default Value address The IP address on which the HTTP API should listen IP Address string 127.0.0.1 port The port on which the HTTP API should listen int 5000 publicURL The fully qualified public URL for the API. This is used for building URLs in HTTP responses and in OpenAPI Spec generation URL string <nil> readTimeout The maximum time to wait when reading from an HTTP connection time.Duration 15s shutdownTimeout The maximum amount of time to wait for any open HTTP requests to finish before shutting down the HTTP server time.Duration 10s writeTimeout The maximum time to wait when writing to an HTTP connection time.Duration 15s"},{"location":"reference/config/#httpauth","title":"http.auth","text":"Key Description Type Default Value type The auth plugin to use for server side authentication of requests string <nil>"},{"location":"reference/config/#httpauthbasic","title":"http.auth.basic","text":"Key Description Type Default Value passwordfile The path to a .htpasswd file to use for authenticating requests. Passwords should be hashed with bcrypt. 
string <nil>"},{"location":"reference/config/#httptls","title":"http.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify Set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#log","title":"log","text":"Key Description Type Default Value compress Determines if the rotated log files should be compressed using gzip boolean <nil> filename Filename is the file to write logs to. Backup log files will be retained in the same directory string <nil> filesize MaxSize is the maximum size of the log file before it gets rotated BytesSize 100m forceColor Force color to be enabled, even when a non-TTY output is detected boolean <nil> includeCodeInfo Enables the report caller for including the calling file and line number, and the calling function. If using text logs, it uses the logrus text format rather than the default prefix format. 
boolean false level The log level - error, warn, info, debug, trace string info maxAge The maximum time to retain old log files based on the timestamp encoded in their filename time.Duration 24h maxBackups Maximum number of old log files to retain int 2 noColor Force color to be disabled, even when TTY output is detected boolean <nil> timeFormat Custom time format for logs Time format string 2006-01-02T15:04:05.000Z07:00 utc Use UTC timestamps for logs boolean false"},{"location":"reference/config/#logjson","title":"log.json","text":"Key Description Type Default Value enabled Enables JSON formatted logs rather than text. All log color settings are ignored when enabled. boolean false"},{"location":"reference/config/#logjsonfields","title":"log.json.fields","text":"Key Description Type Default Value file Configures the JSON key containing the calling file string file func Configures the JSON key containing the calling function string func level Configures the JSON key containing the log level string level message Configures the JSON key containing the log message string message timestamp Configures the JSON key containing the timestamp of the log string @timestamp"},{"location":"reference/config/#messagewriter","title":"message.writer","text":"Key Description Type Default Value batchMaxInserts The maximum number of database inserts to include when writing a single batch of messages + data int 200 batchTimeout How long to wait for more messages to arrive before flushing the batch time.Duration 10ms count The number of message writer workers int 5"},{"location":"reference/config/#metrics","title":"metrics","text":"Key Description Type Default Value address Deprecated - use monitoring.address instead int 127.0.0.1 enabled Deprecated - use monitoring.enabled instead boolean true path Deprecated - use monitoring.metricsPath instead string /metrics port Deprecated - use monitoring.port instead int 6000 publicURL Deprecated - use monitoring.publicURL instead URL string 
<nil> readTimeout Deprecated - use monitoring.readTimeout instead time.Duration 15s shutdownTimeout The maximum amount of time to wait for any open HTTP requests to finish before shutting down the HTTP server time.Duration 10s writeTimeout Deprecated - use monitoring.writeTimeout instead time.Duration 15s"},{"location":"reference/config/#metricsauth","title":"metrics.auth","text":"Key Description Type Default Value type The auth plugin to use for server side authentication of requests string <nil>"},{"location":"reference/config/#metricsauthbasic","title":"metrics.auth.basic","text":"Key Description Type Default Value passwordfile The path to a .htpasswd file to use for authenticating requests. Passwords should be hashed with bcrypt. string <nil>"},{"location":"reference/config/#metricstls","title":"metrics.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify Set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. 
Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#monitoring","title":"monitoring","text":"Key Description Type Default Value address The IP address on which the metrics HTTP API should listen int 127.0.0.1 enabled Enables the metrics API boolean false metricsPath The path from which to serve the Prometheus metrics string /metrics port The port on which the metrics HTTP API should listen int 6000 publicURL The fully qualified public URL for the metrics API. This is used for building URLs in HTTP responses and in OpenAPI Spec generation URL string <nil> readTimeout The maximum time to wait when reading from an HTTP connection time.Duration 15s shutdownTimeout The maximum amount of time to wait for any open HTTP requests to finish before shutting down the HTTP server time.Duration 10s writeTimeout The maximum time to wait when writing to an HTTP connection time.Duration 15s"},{"location":"reference/config/#monitoringauth","title":"monitoring.auth","text":"Key Description Type Default Value type The auth plugin to use for server side authentication of requests string <nil>"},{"location":"reference/config/#monitoringauthbasic","title":"monitoring.auth.basic","text":"Key Description Type Default Value passwordfile The path to a .htpasswd file to use for authenticating requests. Passwords should be hashed with bcrypt. 
string <nil>"},{"location":"reference/config/#monitoringtls","title":"monitoring.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify Set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#namespaces","title":"namespaces","text":"Key Description Type Default Value default The default namespace - must be in the predefined list string default predefined A list of namespaces to ensure exist, without requiring a broadcast from the network List string <nil>"},{"location":"reference/config/#namespacespredefined","title":"namespaces.predefined[]","text":"Key Description Type Default Value defaultKey A default signing key for blockchain transactions within this namespace string <nil> description A description for the namespace string <nil> name The name of the namespace (must be unique) string <nil> plugins The list of plugins for this namespace string 
<nil>"},{"location":"reference/config/#namespacespredefinedassetmanager","title":"namespaces.predefined[].asset.manager","text":"Key Description Type Default Value keyNormalization Mechanism to normalize keys before using them. Valid options are blockchain_plugin - use blockchain plugin (default) or none - do not attempt normalization string <nil>"},{"location":"reference/config/#namespacespredefinedmultiparty","title":"namespaces.predefined[].multiparty","text":"Key Description Type Default Value enabled Enables multi-party mode for this namespace (defaults to true if an org name or key is configured, either here or at the root level) boolean <nil> networknamespace The shared namespace name to be sent in multiparty messages, if it differs from the local namespace name string <nil>"},{"location":"reference/config/#namespacespredefinedmultipartycontract","title":"namespaces.predefined[].multiparty.contract[]","text":"Key Description Type Default Value firstEvent The first event the contract should process. Valid options are oldest or newest string <nil> location A blockchain-specific contract location. 
For example, an Ethereum contract address, or a Fabric chaincode name and channel string <nil> options Blockchain-specific contract options string <nil>"},{"location":"reference/config/#namespacespredefinedmultipartynode","title":"namespaces.predefined[].multiparty.node","text":"Key Description Type Default Value description A description for the node in this namespace string <nil> name The node name for this namespace string <nil>"},{"location":"reference/config/#namespacespredefinedmultipartyorg","title":"namespaces.predefined[].multiparty.org","text":"Key Description Type Default Value description A description for the local root organization within this namespace string <nil> key The signing key allocated to the root organization within this namespace string <nil> name A short name for the local root organization within this namespace string <nil>"},{"location":"reference/config/#namespacespredefinedtlsconfigs","title":"namespaces.predefined[].tlsConfigs[]","text":"Key Description Type Default Value name Name of the TLS Config string <nil>"},{"location":"reference/config/#namespacespredefinedtlsconfigstls","title":"namespaces.predefined[].tlsConfigs[].tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify Set to true in unit test development environments to disable TLS verification. 
Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#namespacesretry","title":"namespaces.retry","text":"Key Description Type Default Value factor The retry backoff factor float32 2 initDelay The initial retry delay time.Duration 5s maxDelay The maximum retry delay time.Duration 1m"},{"location":"reference/config/#node","title":"node","text":"Key Description Type Default Value description The description of this FireFly node string <nil> name The name of this FireFly node string <nil>"},{"location":"reference/config/#opupdateretry","title":"opupdate.retry","text":"Key Description Type Default Value factor The retry backoff factor float32 2 initialDelay The initial retry delay time.Duration 250ms maxDelay The maximum retry delay time.Duration 1m"},{"location":"reference/config/#opupdateworker","title":"opupdate.worker","text":"Key Description Type Default Value batchMaxInserts The maximum number of database inserts to include when writing a single batch of messages + data int 200 batchTimeout How long to wait for more messages to arrive before flushing the batch time.Duration 50ms count The number of operation update workers int 5 queueLength The size of the queue for the Operation Update worker int 50"},{"location":"reference/config/#orchestrator","title":"orchestrator","text":"Key Description Type Default Value startupAttempts The number of times to attempt to connect to core infrastructure on startup string 5"},{"location":"reference/config/#org","title":"org","text":"Key Description Type Default 
Value description A description of the organization to which this FireFly node belongs (deprecated - should be set on each multi-party namespace instead) string <nil> key The signing key allocated to the organization (deprecated - should be set on each multi-party namespace instead) string <nil> name The name of the organization to which this FireFly node belongs (deprecated - should be set on each multi-party namespace instead) string <nil>"},{"location":"reference/config/#plugins","title":"plugins","text":"Key Description Type Default Value auth Authorization plugin configuration map[string]string <nil> blockchain The list of configured Blockchain plugins string <nil> database The list of configured Database plugins string <nil> dataexchange The array of configured Data Exchange plugins string <nil> identity The list of available Identity plugins string <nil> sharedstorage The list of configured Shared Storage plugins string <nil> tokens The token plugin configurations string <nil>"},{"location":"reference/config/#pluginsauth","title":"plugins.auth[]","text":"Key Description Type Default Value name The name of the auth plugin to use string <nil> type The type of the auth plugin to use string <nil>"},{"location":"reference/config/#pluginsauthbasic","title":"plugins.auth[].basic","text":"Key Description Type Default Value passwordfile The path to a .htpasswd file to use for authenticating requests. Passwords should be hashed with bcrypt. 
string <nil>"},{"location":"reference/config/#pluginsblockchain","title":"plugins.blockchain[]","text":"Key Description Type Default Value name The name of the configured Blockchain plugin string <nil> type The type of the configured Blockchain Connector plugin string <nil>"},{"location":"reference/config/#pluginsblockchainethereumaddressresolver","title":"plugins.blockchain[].ethereum.addressResolver","text":"Key Description Type Default Value alwaysResolve Causes the address resolver to be invoked on every API call that submits a signing key, regardless of whether the input string conforms to an 0x address. Also disables any result caching boolean <nil> bodyTemplate The body go template string to use when making HTTP requests. The template input contains '.Key' and '.Intent' string variables. Go Template string <nil> connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted time.Duration 30s expectContinueTimeout See ExpectContinueTimeout in the Go docs time.Duration 1s headers Adds custom headers to HTTP requests string <nil> idleTimeout The max duration to hold a HTTP keepalive connection between calls time.Duration 475ms maxConnsPerHost The max number of connections, per unique hostname. Zero means no limit int 0 maxIdleConns The max number of idle connections to hold pooled int 100 maxIdleConnsPerHost The max number of idle connections, per unique hostname. Zero means net/http uses the default of only 2. 
int 100 method The HTTP method to use when making requests to the Address Resolver string GET passthroughHeadersEnabled Enable passing through the set of allowed HTTP request headers boolean false requestTimeout The maximum amount of time that a request is allowed to remain open time.Duration 30s responseField The name of a JSON field that is provided in the response, that contains the ethereum address (default address) string address retainOriginal When true the original pre-resolved string is retained after the lookup, and passed down to Ethconnect as the from address boolean <nil> tlsHandshakeTimeout The maximum amount of time to wait for a successful TLS handshake time.Duration 10s url The URL of the Address Resolver string <nil> urlTemplate The URL Go template string to use when calling the Address Resolver. The template input contains '.Key' and '.Intent' string variables. Go Template string <nil>"},{"location":"reference/config/#pluginsblockchainethereumaddressresolverauth","title":"plugins.blockchain[].ethereum.addressResolver.auth","text":"Key Description Type Default Value password Password string <nil> username Username string <nil>"},{"location":"reference/config/#pluginsblockchainethereumaddressresolverproxy","title":"plugins.blockchain[].ethereum.addressResolver.proxy","text":"Key Description Type Default Value url Optional HTTP proxy server to use when connecting to the Address Resolver URL string <nil>"},{"location":"reference/config/#pluginsblockchainethereumaddressresolverretry","title":"plugins.blockchain[].ethereum.addressResolver.retry","text":"Key Description Type Default Value count The maximum number of times to retry int 5 enabled Enables retries boolean false errorStatusCodeRegex The regex that the error response status code must match to trigger retry string <nil> initWaitTime The initial retry delay time.Duration 250ms maxWaitTime The maximum retry delay time.Duration 
30s"},{"location":"reference/config/#pluginsblockchainethereumaddressresolverthrottle","title":"plugins.blockchain[].ethereum.addressResolver.throttle","text":"Key Description Type Default Value burst The maximum number of requests that can be made in a short period of time before the throttling kicks in. int <nil> requestsPerSecond The average rate at which requests are allowed to pass through over time. int <nil>"},{"location":"reference/config/#pluginsblockchainethereumaddressresolvertls","title":"plugins.blockchain[].ethereum.addressResolver.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify Set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#pluginsblockchainethereumethconnect","title":"plugins.blockchain[].ethereum.ethconnect","text":"Key Description Type Default Value batchSize The number of events Ethconnect should batch together for delivery to FireFly core. 
Only applies when automatically creating a new event stream int 50 batchTimeout How long Ethconnect should wait for new events to arrive and fill a batch, before sending the batch to FireFly core. Only applies when automatically creating a new event stream time.Duration 500 connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted time.Duration 30s expectContinueTimeout See ExpectContinueTimeout in the Go docs time.Duration 1s fromBlock The first event this FireFly instance should listen to from the BatchPin smart contract. Default=0. Only affects initial creation of the event stream Address string 0 headers Adds custom headers to HTTP requests map[string]string <nil> idleTimeout The max duration to hold a HTTP keepalive connection between calls time.Duration 475ms instance The Ethereum address of the FireFly BatchPin smart contract that has been deployed to the blockchain Address string <nil> maxConnsPerHost The max number of connections, per unique hostname. Zero means no limit int 0 maxIdleConns The max number of idle connections to hold pooled int 100 maxIdleConnsPerHost The max number of idle connections, per unique hostname. Zero means net/http uses the default of only 2. 
int 100 passthroughHeadersEnabled Enable passing through the set of allowed HTTP request headers boolean false prefixLong The prefix that will be used for Ethconnect specific HTTP headers when FireFly makes requests to Ethconnect string firefly prefixShort The prefix that will be used for Ethconnect specific query parameters when FireFly makes requests to Ethconnect string fly requestTimeout The maximum amount of time that a request is allowed to remain open time.Duration 30s tlsHandshakeTimeout The maximum amount of time to wait for a successful TLS handshake time.Duration 10s topic The websocket listen topic that the node should register on, which is important if there are multiple nodes using a single ethconnect string <nil> url The URL of the Ethconnect instance URL string <nil>"},{"location":"reference/config/#pluginsblockchainethereumethconnectauth","title":"plugins.blockchain[].ethereum.ethconnect.auth","text":"Key Description Type Default Value password Password string <nil> username Username string <nil>"},{"location":"reference/config/#pluginsblockchainethereumethconnectbackgroundstart","title":"plugins.blockchain[].ethereum.ethconnect.backgroundStart","text":"Key Description Type Default Value enabled Start the Ethconnect plugin in the background and enter a retry loop if it fails to start boolean <nil> factor Set the factor by which the delay increases when retrying float32 2 initialDelay Delay between restarts in the case where we retry to restart the ethereum plugin time.Duration 5s maxDelay Max delay between restarts in the case where we retry to restart the ethereum plugin time.Duration 1m"},{"location":"reference/config/#pluginsblockchainethereumethconnectproxy","title":"plugins.blockchain[].ethereum.ethconnect.proxy","text":"Key Description Type Default Value url Optional HTTP proxy server to use when connecting to Ethconnect URL string 
<nil>"},{"location":"reference/config/#pluginsblockchainethereumethconnectretry","title":"plugins.blockchain[].ethereum.ethconnect.retry","text":"Key Description Type Default Value count The maximum number of times to retry int 5 enabled Enables retries boolean false errorStatusCodeRegex The regex that the error response status code must match to trigger retry string <nil> initWaitTime The initial retry delay time.Duration 250ms maxWaitTime The maximum retry delay time.Duration 30s"},{"location":"reference/config/#pluginsblockchainethereumethconnectthrottle","title":"plugins.blockchain[].ethereum.ethconnect.throttle","text":"Key Description Type Default Value burst The maximum number of requests that can be made in a short period of time before the throttling kicks in. int <nil> requestsPerSecond The average rate at which requests are allowed to pass through over time. int <nil>"},{"location":"reference/config/#pluginsblockchainethereumethconnecttls","title":"plugins.blockchain[].ethereum.ethconnect.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify Set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. 
Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#pluginsblockchainethereumethconnectws","title":"plugins.blockchain[].ethereum.ethconnect.ws","text":"Key Description Type Default Value connectionTimeout The amount of time to wait while establishing a connection (or auto-reconnection) time.Duration 45s heartbeatInterval The amount of time to wait between heartbeat signals on the WebSocket connection time.Duration 30s initialConnectAttempts The number of attempts FireFly will make to connect to the WebSocket when starting up, before failing int 5 path The WebSocket server URL to which FireFly should connect WebSocket URL string <nil> readBufferSize The size in bytes of the read buffer for the WebSocket connection BytesSize 16Kb url URL to use for WebSocket - overrides url one level up (in the HTTP config) string <nil> writeBufferSize The size in bytes of the write buffer for the WebSocket connection BytesSize 16Kb"},{"location":"reference/config/#pluginsblockchainethereumfftm","title":"plugins.blockchain[].ethereum.fftm","text":"Key Description Type Default Value connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted time.Duration 30s expectContinueTimeout See ExpectContinueTimeout in the Go docs time.Duration 1s headers Adds custom headers to HTTP requests map[string]string <nil> idleTimeout The max duration to hold a HTTP keepalive connection between calls time.Duration 475ms maxConnsPerHost The max number of connections, per unique hostname. Zero means no limit int 0 maxIdleConns The max number of idle connections to hold pooled int 100 maxIdleConnsPerHost The max number of idle connections, per unique hostname. Zero means net/http uses the default of only 2. 
int 100 passthroughHeadersEnabled Enable passing through the set of allowed HTTP request headers boolean false requestTimeout The maximum amount of time that a request is allowed to remain open time.Duration 30s tlsHandshakeTimeout The maximum amount of time to wait for a successful TLS handshake time.Duration 10s url The URL of the FireFly Transaction Manager runtime, if enabled string <nil>"},{"location":"reference/config/#pluginsblockchainethereumfftmauth","title":"plugins.blockchain[].ethereum.fftm.auth","text":"Key Description Type Default Value password Password string <nil> username Username string <nil>"},{"location":"reference/config/#pluginsblockchainethereumfftmproxy","title":"plugins.blockchain[].ethereum.fftm.proxy","text":"Key Description Type Default Value url Optional HTTP proxy server to use when connecting to the Transaction Manager string <nil>"},{"location":"reference/config/#pluginsblockchainethereumfftmretry","title":"plugins.blockchain[].ethereum.fftm.retry","text":"Key Description Type Default Value count The maximum number of times to retry int 5 enabled Enables retries boolean false errorStatusCodeRegex The regex that the error response status code must match to trigger retry string <nil> initWaitTime The initial retry delay time.Duration 250ms maxWaitTime The maximum retry delay time.Duration 30s"},{"location":"reference/config/#pluginsblockchainethereumfftmthrottle","title":"plugins.blockchain[].ethereum.fftm.throttle","text":"Key Description Type Default Value burst The maximum number of requests that can be made in a short period of time before the throttling kicks in. int <nil> requestsPerSecond The average rate at which requests are allowed to pass through over time. 
int <nil>"},{"location":"reference/config/#pluginsblockchainethereumfftmtls","title":"plugins.blockchain[].ethereum.fftm.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify Set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#pluginsblockchainfabricfabconnect","title":"plugins.blockchain[].fabric.fabconnect","text":"Key Description Type Default Value batchSize The number of events Fabconnect should batch together for delivery to FireFly core. 
Only applies when automatically creating a new event stream int 50 batchTimeout The maximum amount of time to wait for a batch to complete time.Duration 500 chaincode The name of the Fabric chaincode that FireFly will use for BatchPin transactions (deprecated - use fireflyContract[].chaincode) string <nil> channel The Fabric channel that FireFly will use for BatchPin transactions string <nil> connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted time.Duration 30s expectContinueTimeout See ExpectContinueTimeout in the Go docs time.Duration 1s headers Adds custom headers to HTTP requests map[string]string <nil> idleTimeout The max duration to hold a HTTP keepalive connection between calls time.Duration 475ms maxConnsPerHost The max number of connections, per unique hostname. Zero means no limit int 0 maxIdleConns The max number of idle connections to hold pooled int 100 maxIdleConnsPerHost The max number of idle connections, per unique hostname. Zero means net/http uses the default of only 2. 
int 100 passthroughHeadersEnabled Enable passing through the set of allowed HTTP request headers boolean false prefixLong The prefix that will be used for Fabconnect specific HTTP headers when FireFly makes requests to Fabconnect string firefly prefixShort The prefix that will be used for Fabconnect specific query parameters when FireFly makes requests to Fabconnect string fly requestTimeout The maximum amount of time that a request is allowed to remain open time.Duration 30s signer The Fabric signing key to use when submitting transactions to Fabconnect string <nil> tlsHandshakeTimeout The maximum amount of time to wait for a successful TLS handshake time.Duration 10s topic The websocket listen topic that the node should register on, which is important if there are multiple nodes using a single Fabconnect string <nil> url The URL of the Fabconnect instance URL string <nil>"},{"location":"reference/config/#pluginsblockchainfabricfabconnectauth","title":"plugins.blockchain[].fabric.fabconnect.auth","text":"Key Description Type Default Value password Password string <nil> username Username string <nil>"},{"location":"reference/config/#pluginsblockchainfabricfabconnectbackgroundstart","title":"plugins.blockchain[].fabric.fabconnect.backgroundStart","text":"Key Description Type Default Value enabled Start the fabric plugin in the background and enter retry loop if failed to start boolean <nil> factor Set the factor by which the delay increases when retrying float32 2 initialDelay Delay between restarts in the case where we retry to restart the fabric plugin time.Duration 5s maxDelay Max delay between restarts in the case where we retry to restart the fabric plugin time.Duration 1m"},{"location":"reference/config/#pluginsblockchainfabricfabconnectproxy","title":"plugins.blockchain[].fabric.fabconnect.proxy","text":"Key Description Type Default Value url Optional HTTP proxy server to use when connecting to Fabconnect URL string 
<nil>"},{"location":"reference/config/#pluginsblockchainfabricfabconnectretry","title":"plugins.blockchain[].fabric.fabconnect.retry","text":"Key Description Type Default Value count The maximum number of times to retry int 5 enabled Enables retries boolean false errorStatusCodeRegex The regex that the error response status code must match to trigger retry string <nil> initWaitTime The initial retry delay time.Duration 250ms maxWaitTime The maximum retry delay time.Duration 30s"},{"location":"reference/config/#pluginsblockchainfabricfabconnectthrottle","title":"plugins.blockchain[].fabric.fabconnect.throttle","text":"Key Description Type Default Value burst The maximum number of requests that can be made in a short period of time before the throttling kicks in. int <nil> requestsPerSecond The average rate at which requests are allowed to pass through over time. int <nil>"},{"location":"reference/config/#pluginsblockchainfabricfabconnecttls","title":"plugins.blockchain[].fabric.fabconnect.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify Set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. 
Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#pluginsblockchainfabricfabconnectws","title":"plugins.blockchain[].fabric.fabconnect.ws","text":"Key Description Type Default Value connectionTimeout The amount of time to wait while establishing a connection (or auto-reconnection) time.Duration 45s heartbeatInterval The amount of time to wait between heartbeat signals on the WebSocket connection time.Duration 30s initialConnectAttempts The number of attempts FireFly will make to connect to the WebSocket when starting up, before failing int 5 path The WebSocket server URL to which FireFly should connect WebSocket URL string <nil> readBufferSize The size in bytes of the read buffer for the WebSocket connection BytesSize 16Kb url URL to use for WebSocket - overrides url one level up (in the HTTP config) string <nil> writeBufferSize The size in bytes of the write buffer for the WebSocket connection BytesSize 16Kb"},{"location":"reference/config/#pluginsblockchaintezosaddressresolver","title":"plugins.blockchain[].tezos.addressResolver","text":"Key Description Type Default Value alwaysResolve Causes the address resolver to be invoked on every API call that submits a signing key. Also disables any result caching boolean <nil> bodyTemplate The body go template string to use when making HTTP requests Go Template string <nil> connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted time.Duration 30s expectContinueTimeout See ExpectContinueTimeout in the Go docs time.Duration 1s headers Adds custom headers to HTTP requests map[string]string <nil> idleTimeout The max duration to hold a HTTP keepalive connection between calls time.Duration 475ms maxConnsPerHost The max number of connections, per unique hostname. 
Zero means no limit int 0 maxIdleConns The max number of idle connections to hold pooled int 100 maxIdleConnsPerHost The max number of idle connections, per unique hostname. Zero means net/http uses the default of only 2. int 100 method The HTTP method to use when making requests to the Address Resolver string GET passthroughHeadersEnabled Enable passing through the set of allowed HTTP request headers boolean false requestTimeout The maximum amount of time that a request is allowed to remain open time.Duration 30s responseField The name of a JSON field that is provided in the response, that contains the tezos address (default address) string address retainOriginal When true the original pre-resolved string is retained after the lookup, and passed down to Tezosconnect as the from address boolean <nil> tlsHandshakeTimeout The maximum amount of time to wait for a successful TLS handshake time.Duration 10s url The URL of the Address Resolver string <nil> urlTemplate The URL Go template string to use when calling the Address Resolver. The template input contains '.Key' and '.Intent' string variables. 
Go Template string <nil>"},{"location":"reference/config/#pluginsblockchaintezosaddressresolverauth","title":"plugins.blockchain[].tezos.addressResolver.auth","text":"Key Description Type Default Value password Password string <nil> username Username string <nil>"},{"location":"reference/config/#pluginsblockchaintezosaddressresolverproxy","title":"plugins.blockchain[].tezos.addressResolver.proxy","text":"Key Description Type Default Value url Optional HTTP proxy server to connect through string <nil>"},{"location":"reference/config/#pluginsblockchaintezosaddressresolverretry","title":"plugins.blockchain[].tezos.addressResolver.retry","text":"Key Description Type Default Value count The maximum number of times to retry int 5 enabled Enables retries boolean false errorStatusCodeRegex The regex that the error response status code must match to trigger retry string <nil> initWaitTime The initial retry delay time.Duration 250ms maxWaitTime The maximum retry delay time.Duration 30s"},{"location":"reference/config/#pluginsblockchaintezosaddressresolverthrottle","title":"plugins.blockchain[].tezos.addressResolver.throttle","text":"Key Description Type Default Value burst The maximum number of requests that can be made in a short period of time before the throttling kicks in. int <nil> requestsPerSecond The average rate at which requests are allowed to pass through over time. 
int <nil>"},{"location":"reference/config/#pluginsblockchaintezosaddressresolvertls","title":"plugins.blockchain[].tezos.addressResolver.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify Set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#pluginsblockchaintezostezosconnect","title":"plugins.blockchain[].tezos.tezosconnect","text":"Key Description Type Default Value batchSize The number of events Tezosconnect should batch together for delivery to FireFly core. Only applies when automatically creating a new event stream int 50 batchTimeout How long Tezosconnect should wait for new events to arrive and fill a batch, before sending the batch to FireFly core. 
Only applies when automatically creating a new event stream time.Duration 500 connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted time.Duration 30s expectContinueTimeout See ExpectContinueTimeout in the Go docs time.Duration 1s headers Adds custom headers to HTTP requests map[string]string <nil> idleTimeout The max duration to hold a HTTP keepalive connection between calls time.Duration 475ms maxConnsPerHost The max number of connections, per unique hostname. Zero means no limit int 0 maxIdleConns The max number of idle connections to hold pooled int 100 maxIdleConnsPerHost The max number of idle connections, per unique hostname. Zero means net/http uses the default of only 2. int 100 passthroughHeadersEnabled Enable passing through the set of allowed HTTP request headers boolean false prefixLong The prefix that will be used for Tezosconnect specific HTTP headers when FireFly makes requests to Tezosconnect string firefly prefixShort The prefix that will be used for Tezosconnect specific query parameters when FireFly makes requests to Tezosconnect string fly requestTimeout The maximum amount of time that a request is allowed to remain open time.Duration 30s tlsHandshakeTimeout The maximum amount of time to wait for a successful TLS handshake time.Duration 10s topic The websocket listen topic that the node should register on, which is important if there are multiple nodes using a single tezosconnect string <nil> url The URL of the Tezosconnect instance URL string <nil>"},{"location":"reference/config/#pluginsblockchaintezostezosconnectauth","title":"plugins.blockchain[].tezos.tezosconnect.auth","text":"Key Description Type Default Value password Password string <nil> username Username string <nil>"},{"location":"reference/config/#pluginsblockchaintezostezosconnectbackgroundstart","title":"plugins.blockchain[].tezos.tezosconnect.backgroundStart","text":"Key Description Type Default Value enabled Start the 
Tezosconnect plugin in the background and enter retry loop if failed to start boolean <nil> factor Set the factor by which the delay increases when retrying float32 2 initialDelay Delay between restarts in the case where we retry to restart the tezos plugin time.Duration 5s maxDelay Max delay between restarts in the case where we retry to restart the tezos plugin time.Duration 1m"},{"location":"reference/config/#pluginsblockchaintezostezosconnectproxy","title":"plugins.blockchain[].tezos.tezosconnect.proxy","text":"Key Description Type Default Value url Optional HTTP proxy server to use when connecting to Tezosconnect URL string <nil>"},{"location":"reference/config/#pluginsblockchaintezostezosconnectretry","title":"plugins.blockchain[].tezos.tezosconnect.retry","text":"Key Description Type Default Value count The maximum number of times to retry int 5 enabled Enables retries boolean false errorStatusCodeRegex The regex that the error response status code must match to trigger retry string <nil> initWaitTime The initial retry delay time.Duration 250ms maxWaitTime The maximum retry delay time.Duration 30s"},{"location":"reference/config/#pluginsblockchaintezostezosconnectthrottle","title":"plugins.blockchain[].tezos.tezosconnect.throttle","text":"Key Description Type Default Value burst The maximum number of requests that can be made in a short period of time before the throttling kicks in. int <nil> requestsPerSecond The average rate at which requests are allowed to pass through over time. 
int <nil>"},{"location":"reference/config/#pluginsblockchaintezostezosconnecttls","title":"plugins.blockchain[].tezos.tezosconnect.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify Set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. 
Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#pluginsblockchaintezostezosconnectws","title":"plugins.blockchain[].tezos.tezosconnect.ws","text":"Key Description Type Default Value connectionTimeout The amount of time to wait while establishing a connection (or auto-reconnection) time.Duration 45s heartbeatInterval The amount of time to wait between heartbeat signals on the WebSocket connection time.Duration 30s initialConnectAttempts The number of attempts FireFly will make to connect to the WebSocket when starting up, before failing int 5 path The WebSocket server URL to which FireFly should connect WebSocket URL string <nil> readBufferSize The size in bytes of the read buffer for the WebSocket connection BytesSize 16Kb url URL to use for WebSocket - overrides url one level up (in the HTTP config) string <nil> writeBufferSize The size in bytes of the write buffer for the WebSocket connection BytesSize 16Kb"},{"location":"reference/config/#pluginsdatabase","title":"plugins.database[]","text":"Key Description Type Default Value name The name of the Database plugin string <nil> type The type of the configured Database plugin string <nil>"},{"location":"reference/config/#pluginsdatabasepostgres","title":"plugins.database[].postgres","text":"Key Description Type Default Value maxConnIdleTime The maximum amount of time a database connection can be idle time.Duration 1m maxConnLifetime The maximum amount of time to keep a database connection open time.Duration <nil> maxConns Maximum connections to the database int 50 maxIdleConns The maximum number of idle connections to the database int <nil> url The PostgreSQL connection string for the database string 
<nil>"},{"location":"reference/config/#pluginsdatabasepostgresmigrations","title":"plugins.database[].postgres.migrations","text":"Key Description Type Default Value auto Enables automatic database migrations boolean false directory The directory containing the numerically ordered migration DDL files to apply to the database string ./db/migrations/postgres"},{"location":"reference/config/#pluginsdatabasesqlite3","title":"plugins.database[].sqlite3","text":"Key Description Type Default Value maxConnIdleTime The maximum amount of time a database connection can be idle time.Duration 1m maxConnLifetime The maximum amount of time to keep a database connection open time.Duration <nil> maxConns Maximum connections to the database int 1 maxIdleConns The maximum number of idle connections to the database int <nil> url The SQLite connection string for the database string <nil>"},{"location":"reference/config/#pluginsdatabasesqlite3migrations","title":"plugins.database[].sqlite3.migrations","text":"Key Description Type Default Value auto Enables automatic database migrations boolean false directory The directory containing the numerically ordered migration DDL files to apply to the database string ./db/migrations/sqlite"},{"location":"reference/config/#pluginsdataexchange","title":"plugins.dataexchange[]","text":"Key Description Type Default Value name The name of the configured Data Exchange plugin string <nil> type The Data Exchange plugin to use string <nil>"},{"location":"reference/config/#pluginsdataexchangeffdx","title":"plugins.dataexchange[].ffdx","text":"Key Description Type Default Value connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted time.Duration 30s expectContinueTimeout See ExpectContinueTimeout in the Go docs time.Duration 1s headers Adds custom headers to HTTP requests map[string]string <nil> idleTimeout The max duration to hold a HTTP keepalive connection between calls time.Duration 475ms 
initEnabled Instructs FireFly to always post all current nodes to the /init API before connecting or reconnecting to the connector boolean false manifestEnabled Determines whether to require+validate a manifest from other DX instances in the network. Must be supported by the connector string false maxConnsPerHost The max number of connections, per unique hostname. Zero means no limit int 0 maxIdleConns The max number of idle connections to hold pooled int 100 maxIdleConnsPerHost The max number of idle connections, per unique hostname. Zero means net/http uses the default of only 2. int 100 passthroughHeadersEnabled Enable passing through the set of allowed HTTP request headers boolean false requestTimeout The maximum amount of time that a request is allowed to remain open time.Duration 30s tlsHandshakeTimeout The maximum amount of time to wait for a successful TLS handshake time.Duration 10s url The URL of the Data Exchange instance URL string <nil>"},{"location":"reference/config/#pluginsdataexchangeffdxauth","title":"plugins.dataexchange[].ffdx.auth","text":"Key Description Type Default Value password Password string <nil> username Username string <nil>"},{"location":"reference/config/#pluginsdataexchangeffdxbackgroundstart","title":"plugins.dataexchange[].ffdx.backgroundStart","text":"Key Description Type Default Value enabled Start the data exchange plugin in the background and enter retry loop if failed to start boolean false factor Set the factor by which the delay increases when retrying float32 2 initialDelay Delay between restarts in the case where we retry to restart the data exchange plugin time.Duration 5s maxDelay Max delay between restarts in the case where we retry to restart the data exchange plugin time.Duration 1m"},{"location":"reference/config/#pluginsdataexchangeffdxeventretry","title":"plugins.dataexchange[].ffdx.eventRetry","text":"Key Description Type Default Value factor The retry backoff factor, for event processing float32 2 initialDelay 
The initial retry delay, for event processing time.Duration 50ms maxDelay The maximum retry delay, for event processing time.Duration 30s"},{"location":"reference/config/#pluginsdataexchangeffdxproxy","title":"plugins.dataexchange[].ffdx.proxy","text":"Key Description Type Default Value url Optional HTTP proxy server to use when connecting to the Data Exchange URL string <nil>"},{"location":"reference/config/#pluginsdataexchangeffdxretry","title":"plugins.dataexchange[].ffdx.retry","text":"Key Description Type Default Value count The maximum number of times to retry int 5 enabled Enables retries boolean false errorStatusCodeRegex The regex that the error response status code must match to trigger retry string <nil> initWaitTime The initial retry delay time.Duration 250ms maxWaitTime The maximum retry delay time.Duration 30s"},{"location":"reference/config/#pluginsdataexchangeffdxthrottle","title":"plugins.dataexchange[].ffdx.throttle","text":"Key Description Type Default Value burst The maximum number of requests that can be made in a short period of time before the throttling kicks in. int <nil> requestsPerSecond The average rate at which requests are allowed to pass through over time. int <nil>"},{"location":"reference/config/#pluginsdataexchangeffdxtls","title":"plugins.dataexchange[].ffdx.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify Set to true in unit test development environments to disable TLS verification. 
Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#pluginsdataexchangeffdxws","title":"plugins.dataexchange[].ffdx.ws","text":"Key Description Type Default Value connectionTimeout The amount of time to wait while establishing a connection (or auto-reconnection) time.Duration 45s heartbeatInterval The amount of time to wait between heartbeat signals on the WebSocket connection time.Duration 30s initialConnectAttempts The number of attempts FireFly will make to connect to the WebSocket when starting up, before failing int 5 path The WebSocket server URL to which FireFly should connect WebSocket URL string <nil> readBufferSize The size in bytes of the read buffer for the WebSocket connection BytesSize 16Kb url URL to use for WebSocket - overrides url one level up (in the HTTP config) string <nil> writeBufferSize The size in bytes of the write buffer for the WebSocket connection BytesSize 16Kb"},{"location":"reference/config/#pluginsidentity","title":"plugins.identity[]","text":"Key Description Type Default Value name The name of a configured Identity plugin string <nil> type The type of a configured Identity plugin string <nil>"},{"location":"reference/config/#pluginssharedstorage","title":"plugins.sharedstorage[]","text":"Key Description Type Default Value name The name of the Shared Storage plugin to use string <nil> type The Shared Storage plugin to use string <nil>"},{"location":"reference/config/#pluginssharedstorageipfsapi","title":"plugins.sharedstorage[].ipfs.api","text":"Key Description Type 
Default Value connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted time.Duration 30s expectContinueTimeout See ExpectContinueTimeout in the Go docs time.Duration 1s headers Adds custom headers to HTTP requests map[string]string <nil> idleTimeout The max duration to hold a HTTP keepalive connection between calls time.Duration 475ms maxConnsPerHost The max number of connections, per unique hostname. Zero means no limit int 0 maxIdleConns The max number of idle connections to hold pooled int 100 maxIdleConnsPerHost The max number of idle connections, per unique hostname. Zero means net/http uses the default of only 2. int 100 passthroughHeadersEnabled Enable passing through the set of allowed HTTP request headers boolean false requestTimeout The maximum amount of time that a request is allowed to remain open time.Duration 30s tlsHandshakeTimeout The maximum amount of time to wait for a successful TLS handshake time.Duration 10s url The URL for the IPFS API URL string <nil>"},{"location":"reference/config/#pluginssharedstorageipfsapiauth","title":"plugins.sharedstorage[].ipfs.api.auth","text":"Key Description Type Default Value password Password string <nil> username Username string <nil>"},{"location":"reference/config/#pluginssharedstorageipfsapiproxy","title":"plugins.sharedstorage[].ipfs.api.proxy","text":"Key Description Type Default Value url Optional HTTP proxy server to use when connecting to the IPFS API URL string <nil>"},{"location":"reference/config/#pluginssharedstorageipfsapiretry","title":"plugins.sharedstorage[].ipfs.api.retry","text":"Key Description Type Default Value count The maximum number of times to retry int 5 enabled Enables retries boolean false errorStatusCodeRegex The regex that the error response status code must match to trigger retry string <nil> initWaitTime The initial retry delay time.Duration 250ms maxWaitTime The maximum retry delay time.Duration 
30s"},{"location":"reference/config/#pluginssharedstorageipfsapithrottle","title":"plugins.sharedstorage[].ipfs.api.throttle","text":"Key Description Type Default Value burst The maximum number of requests that can be made in a short period of time before the throttling kicks in. int <nil> requestsPerSecond The average rate at which requests are allowed to pass through over time. int <nil>"},{"location":"reference/config/#pluginssharedstorageipfsapitls","title":"plugins.sharedstorage[].ipfs.api.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify Set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. 
Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#pluginssharedstorageipfsgateway","title":"plugins.sharedstorage[].ipfs.gateway","text":"Key Description Type Default Value connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted time.Duration 30s expectContinueTimeout See ExpectContinueTimeout in the Go docs time.Duration 1s headers Adds custom headers to HTTP requests map[string]string <nil> idleTimeout The max duration to hold a HTTP keepalive connection between calls time.Duration 475ms maxConnsPerHost The max number of connections, per unique hostname. Zero means no limit int 0 maxIdleConns The max number of idle connections to hold pooled int 100 maxIdleConnsPerHost The max number of idle connections, per unique hostname. Zero means net/http uses the default of only 2. 
int 100 passthroughHeadersEnabled Enable passing through the set of allowed HTTP request headers boolean false requestTimeout The maximum amount of time that a request is allowed to remain open time.Duration 30s tlsHandshakeTimeout The maximum amount of time to wait for a successful TLS handshake time.Duration 10s url The URL for the IPFS Gateway URL string <nil>"},{"location":"reference/config/#pluginssharedstorageipfsgatewayauth","title":"plugins.sharedstorage[].ipfs.gateway.auth","text":"Key Description Type Default Value password Password string <nil> username Username string <nil>"},{"location":"reference/config/#pluginssharedstorageipfsgatewayproxy","title":"plugins.sharedstorage[].ipfs.gateway.proxy","text":"Key Description Type Default Value url Optional HTTP proxy server to use when connecting to the IPFS Gateway URL string <nil>"},{"location":"reference/config/#pluginssharedstorageipfsgatewayretry","title":"plugins.sharedstorage[].ipfs.gateway.retry","text":"Key Description Type Default Value count The maximum number of times to retry int 5 enabled Enables retries boolean false errorStatusCodeRegex The regex that the error response status code must match to trigger retry string <nil> initWaitTime The initial retry delay time.Duration 250ms maxWaitTime The maximum retry delay time.Duration 30s"},{"location":"reference/config/#pluginssharedstorageipfsgatewaythrottle","title":"plugins.sharedstorage[].ipfs.gateway.throttle","text":"Key Description Type Default Value burst The maximum number of requests that can be made in a short period of time before the throttling kicks in. int <nil> requestsPerSecond The average rate at which requests are allowed to pass through over time. 
int <nil>"},{"location":"reference/config/#pluginssharedstorageipfsgatewaytls","title":"plugins.sharedstorage[].ipfs.gateway.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify Set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. 
Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#pluginstokens","title":"plugins.tokens[]","text":"Key Description Type Default Value broadcastName The name to be used in broadcast messages related to this token plugin, if it differs from the local plugin name string <nil> name A name to identify this token plugin string <nil> type The type of the token plugin to use string <nil>"},{"location":"reference/config/#pluginstokensfftokens","title":"plugins.tokens[].fftokens","text":"Key Description Type Default Value connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted time.Duration 30s expectContinueTimeout See ExpectContinueTimeout in the Go docs time.Duration 1s headers Adds custom headers to HTTP requests map[string]string <nil> idleTimeout The max duration to hold a HTTP keepalive connection between calls time.Duration 475ms maxConnsPerHost The max number of connections, per unique hostname. Zero means no limit int 0 maxIdleConns The max number of idle connections to hold pooled int 100 maxIdleConnsPerHost The max number of idle connections, per unique hostname. Zero means net/http uses the default of only 2. 
int 100 passthroughHeadersEnabled Enable passing through the set of allowed HTTP request headers boolean false requestTimeout The maximum amount of time that a request is allowed to remain open time.Duration 30s tlsHandshakeTimeout The maximum amount of time to wait for a successful TLS handshake time.Duration 10s url The URL of the token connector URL string <nil>"},{"location":"reference/config/#pluginstokensfftokensauth","title":"plugins.tokens[].fftokens.auth","text":"Key Description Type Default Value password Password string <nil> username Username string <nil>"},{"location":"reference/config/#pluginstokensfftokensbackgroundstart","title":"plugins.tokens[].fftokens.backgroundStart","text":"Key Description Type Default Value enabled Start the tokens plugin in the background and enter retry loop if failed to start boolean false factor Set the factor by which the delay increases when retrying float32 2 initialDelay Delay between restarts in the case where we retry to restart the token plugin time.Duration 5s maxDelay Max delay between restarts in the case where we retry to restart the token plugin time.Duration 1m"},{"location":"reference/config/#pluginstokensfftokenseventretry","title":"plugins.tokens[].fftokens.eventRetry","text":"Key Description Type Default Value factor The retry backoff factor, for event processing float32 2 initialDelay The initial retry delay, for event processing time.Duration 50ms maxDelay The maximum retry delay, for event processing time.Duration 30s"},{"location":"reference/config/#pluginstokensfftokensproxy","title":"plugins.tokens[].fftokens.proxy","text":"Key Description Type Default Value url Optional HTTP proxy server to use when connecting to the token connector URL string <nil>"},{"location":"reference/config/#pluginstokensfftokensretry","title":"plugins.tokens[].fftokens.retry","text":"Key Description Type Default Value count The maximum number of times to retry int 5 enabled Enables retries boolean false errorStatusCodeRegex 
The regex that the error response status code must match to trigger retry string <nil> initWaitTime The initial retry delay time.Duration 250ms maxWaitTime The maximum retry delay time.Duration 30s"},{"location":"reference/config/#pluginstokensfftokensthrottle","title":"plugins.tokens[].fftokens.throttle","text":"Key Description Type Default Value burst The maximum number of requests that can be made in a short period of time before the throttling kicks in. int <nil> requestsPerSecond The average rate at which requests are allowed to pass through over time. int <nil>"},{"location":"reference/config/#pluginstokensfftokenstls","title":"plugins.tokens[].fftokens.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify When set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. 
Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#pluginstokensfftokensws","title":"plugins.tokens[].fftokens.ws","text":"Key Description Type Default Value connectionTimeout The amount of time to wait while establishing a connection (or auto-reconnection) time.Duration 45s heartbeatInterval The amount of time to wait between heartbeat signals on the WebSocket connection time.Duration 30s initialConnectAttempts The number of attempts FireFly will make to connect to the WebSocket when starting up, before failing int 5 path The WebSocket server URL to which FireFly should connect WebSocket URL string <nil> readBufferSize The size in bytes of the read buffer for the WebSocket connection BytesSize 16Kb url URL to use for WebSocket - overrides url one level up (in the HTTP config) string <nil> writeBufferSize The size in bytes of the write buffer for the WebSocket connection BytesSize 16Kb"},{"location":"reference/config/#privatemessagingbatch","title":"privatemessaging.batch","text":"Key Description Type Default Value agentTimeout How long to keep around a batching agent for a sending identity before disposal time.Duration 2m payloadLimit The maximum payload size of a private message Data Exchange payload BytesSize 800Kb size The maximum number of messages in a batch for private messages int 200 timeout The timeout to wait for a batch to fill, before sending time.Duration 1s"},{"location":"reference/config/#privatemessagingretry","title":"privatemessaging.retry","text":"Key Description Type Default Value factor The retry backoff factor float32 2 initDelay The initial retry delay time.Duration 100ms maxDelay The maximum retry delay time.Duration 30s"},{"location":"reference/config/#spi","title":"spi","text":"Key Description Type Default Value address The IP address on which 
the admin HTTP API should listen IP Address string 127.0.0.1 enabled Enables the admin HTTP API boolean false port The port on which the admin HTTP API should listen int 5001 publicURL The fully qualified public URL for the admin API. This is used for building URLs in HTTP responses and in OpenAPI Spec generation URL string <nil> readTimeout The maximum time to wait when reading from an HTTP connection time.Duration 15s shutdownTimeout The maximum amount of time to wait for any open HTTP requests to finish before shutting down the HTTP server time.Duration 10s writeTimeout The maximum time to wait when writing to an HTTP connection time.Duration 15s"},{"location":"reference/config/#spiauth","title":"spi.auth","text":"Key Description Type Default Value type The auth plugin to use for server side authentication of requests string <nil>"},{"location":"reference/config/#spiauthbasic","title":"spi.auth.basic","text":"Key Description Type Default Value passwordfile The path to a .htpasswd file to use for authenticating requests. Passwords should be hashed with bcrypt. string <nil>"},{"location":"reference/config/#spitls","title":"spi.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify When set to true in unit test development environments to disable TLS verification. 
Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#spiws","title":"spi.ws","text":"Key Description Type Default Value blockedWarnInterval How often to log warnings in core, when an admin change event listener falls behind the stream they requested and misses events time.Duration 1m eventQueueLength Server-side queue length for events waiting for delivery over an admin change event listener websocket int 250 readBufferSize The size in bytes of the read buffer for the WebSocket connection BytesSize 16Kb writeBufferSize The size in bytes of the write buffer for the WebSocket connection BytesSize 16Kb"},{"location":"reference/config/#subscription","title":"subscription","text":"Key Description Type Default Value max The maximum number of pre-defined subscriptions that can exist (note for high fan-out consider connecting a dedicated pub/sub broker to the dispatcher) int 500"},{"location":"reference/config/#subscriptiondefaults","title":"subscription.defaults","text":"Key Description Type Default Value batchSize Default read ahead to enable for subscriptions that do not explicitly configure readahead int 50 batchTimeout Default batch timeout int 50ms"},{"location":"reference/config/#subscriptionevents","title":"subscription.events","text":"Key Description Type Default Value maxScanLength The maximum number of events a search for historical events matching a subscription will index from the database int 1000"},{"location":"reference/config/#subscriptionretry","title":"subscription.retry","text":"Key 
Description Type Default Value factor The retry backoff factor float32 2 initDelay The initial retry delay time.Duration 250ms maxDelay The maximum retry delay time.Duration 30s"},{"location":"reference/config/#transactionwriter","title":"transaction.writer","text":"Key Description Type Default Value batchMaxTransactions The maximum number of transaction inserts to include in a batch int 100 batchTimeout How long to wait for more transactions to arrive before flushing the batch time.Duration 10ms count The number of message writer workers int 5"},{"location":"reference/config/#ui","title":"ui","text":"Key Description Type Default Value enabled Enables the web user interface boolean true path The file system path which contains the static HTML, CSS, and JavaScript files for the user interface string <nil>"},{"location":"reference/events/","title":"Event Bus","text":""},{"location":"reference/events/#hyperledger-firefly-event-bus","title":"Hyperledger FireFly Event Bus","text":"The FireFly event bus provides your application with a single stream of events from all of the back-end services that plug into FireFly.
Applications subscribe to these events using developer-friendly protocols like WebSockets and Webhooks. Additional transports and messaging systems like NATS, Kafka, and JMS servers can be connected through plugins.
Each application creates one or more Subscriptions to identify itself. In this subscription the application can choose to receive all events that are emitted within a namespace, or can use server-side filtering to receive only a subset of events.
The event bus reliably keeps track of which events have been delivered to which applications, via an offset into the main event stream that is updated each time an application acknowledges receipt of events over its subscription.
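The subscribe-and-acknowledge flow described above can be sketched in Python. This is a minimal illustration only: the payload field names (`type`, `namespace`, `name`, `autoack`, `id`) are assumptions based on common usage of FireFly's WebSocket transport, so verify them against your node's API reference before relying on them.

```python
import json

def start_payload(namespace: str, name: str, autoack: bool = False) -> str:
    # Begin delivery on a named (durable) subscription over the WebSocket.
    # Field names are illustrative assumptions; check your node's docs.
    return json.dumps({
        "type": "start",
        "namespace": namespace,
        "name": name,
        # False: the app acks explicitly, so the server-side offset only
        # advances once the event has been processed successfully.
        "autoack": autoack,
    })

def ack_payload(event_id: str) -> str:
    # Acknowledge one delivered event; the server advances the offset.
    return json.dumps({"type": "ack", "id": event_id})
```

The key point is the explicit ack: by acknowledging only after your handler commits its work, the offset never moves past an unprocessed event.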
Decentralized applications are built around a source of truth that is shared between multiple parties. No one party can change the state unilaterally, as their changes need to be processed in order with the other changes in the system. Each party processes requests to change shared state in the same order, against a common set of rules for what is allowed at that exact point in the processing. As a result everybody deterministically ends up with the same state at the end of the processing.
This requires an event-driven programming model.
You will find an event-driven model at the core of every blockchain Smart Contract technology.
This event-driven approach is unavoidable regardless of how much of your business data & logic can be directly stored/processed on-chain, vs. off-chain.
So Hyperledger FireFly aims to provide you with the tools to easily manage this model throughout your decentralized application stack.
Your back-end application should be structured for this event-driven paradigm, with an Event Handler constantly listening for events, applying a consistent State Machine to those events and applying the changes to your Application Database.
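The handler-plus-state-machine pattern above can be sketched as follows. The event shape, states, and transition rules here are purely illustrative (they are not FireFly's schema), and the application database is an in-memory dict for brevity.

```python
# Illustrative event-driven state machine: each event is applied in order,
# against explicit transition rules, and the result is persisted to the
# application "database" (an in-memory dict standing in for a real store).

TRANSITIONS = {  # state -> events allowed in that state (illustrative rules)
    "new": {"submitted"},
    "submitted": {"approved", "rejected"},
}

NEXT_STATE = {"submitted": "submitted", "approved": "done", "rejected": "failed"}

def apply_event(db: dict, event: dict) -> None:
    record = db.setdefault(event["ref"], {"state": "new"})
    if event["type"] not in TRANSITIONS.get(record["state"], set()):
        raise ValueError(f"illegal event {event['type']} in state {record['state']}")
    record["state"] = NEXT_STATE[event["type"]]

db = {}
for ev in [{"ref": "order-1", "type": "submitted"},
           {"ref": "order-1", "type": "approved"}]:
    apply_event(db, ev)
```

Because every party applies the same rules to the same ordered events, every party's database converges on the same state.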
FireFly comes with a built-in event processor for Token transfers & approvals, which implements this pattern to maintain balances and transaction history in a rich-query off-chain data cache.
"},{"location":"reference/events/#decentralized-event-processing","title":"Decentralized Event Processing","text":"In a decentralized system, you need to consider that each organization runs its own applications, and has its own private database.
At any given point in time different organizations will have slightly different views of what the most up to date information is - even for the blockchain state.
As well as the agreed business logic, there will be private data and core system integrations that are needed to process events as they happen. Some of this data might be received privately from other parties, over a secure communications channel (not the blockchain).
The system must be eventually consistent across all parties for any business data/decision that those parties need to agree on. This happens by all parties processing the same events in the same order, and by applying the same business logic (for the parts of the business logic that are agreed).
This means that when processing an event, a participant must have access to enough historical data/state to reach the same conclusion as everyone else.
Let's look at a couple of examples.
"},{"location":"reference/events/#example-1-a-fungible-token-balance-transfer","title":"Example 1: A fungible token balance transfer","text":"You need to be able to verify the complete lineage of the tokens being spent, in order to know that they cannot be double spent anywhere in the network.
This means the transaction must be backed by a blockchain verifiable by all participants on the network that could hold balances of that token.
You might be able to use advanced cryptography (such as zero-knowledge proofs) to mask the participants in the trade, but the transactions themselves must be verifiable to everyone in a global sequence that prevents double spending.
"},{"location":"reference/events/#example-2-a-step-in-a-multi-party-business-process","title":"Example 2: A step in a multi-party business process","text":"Here it is likely you want to restrict visibility of the data to just the parties directly involved in the business process.
To come to a common agreement on outcome, the parties must know they are processing the same data in the same order. So at minimum a proof (a hash of the data) needs to be \"pinned\" to a blockchain ledger visible to all participants involved in the process.
You can then choose to put more processing on the blockchain, to enforce some critical rules in the business state machine that must be executed fairly to prevent one party from cheating the system. For example, ensuring that the highest bid is chosen in a competitive bidding process, or that a minimum set of parties have voted their agreement before a transaction is finalized.
Other steps in the process might include human decision making, private data from the core systems of one member, or proprietary business logic that one member is not willing to share. These steps are \"non-deterministic\" - you cannot predict the outcome, nor be guaranteed to reproduce the same outcome with the same inputs in the future.
The FireFly event bus is designed to make triggering these non-deterministic steps easy, while still allowing them to be part of the overall state machine of the business process. You need to take care that the system is designed so parties cannot cheat, and must follow the rules. How much of that rule enforcement needs to be executed on-chain vs. off-chain (backed by a deterministic order through the blockchain) is different for each use case.
Remember that tokens provide a great set of building blocks for on-chain steps in your decentralized applications. Enterprise NFTs allow generation of a globally unique ID, and track ownership. Fungible tokens allow value transfer, and can be extended with smart contracts that lock/unlock funds in \"digital escrow\" while complex off-chain agreement happens.
"},{"location":"reference/events/#privacy-groups-and-late-join","title":"Privacy groups and late join","text":"If a new participant needs to join into a business transaction that has already started, they must first \"catch up\" with the current state before they can play their part. In a real-world scenario they might not be allowed to see all the data that's visible to the other parties, so it is common to create a new stream of communications that includes all of the existing parties, plus the new party, to continue the process.
If you use the same blockchain to back both groups, then you can safely order business process steps that involve different parties across these overlapping groups of participants.
Using a single Ethereum permissioned side-chain for example.
Alternatively, you can create dedicated distributed ledgers (DLTs) for communication between these groups of participants. This can allow more logic and data to go on-chain directly, although you still must consider the fact that this data is immutable and can never be deleted.
Using Hyperledger Fabric channels for example.
On top of either type of ledger, FireFly provides a private Group construct to facilitate secure off-chain data exchanges, and to efficiently pin these communications to the blockchain in batches.
These private data exchanges can also be coordinated with the most sophisticated on-chain transactions, such as token transfers.
"},{"location":"reference/events/#event-types","title":"Event Types","text":"FireFly provides a number of different types of events to your application, designed to allow you to build your application state machine quickly and reliably.
All events in FireFly share a common base structure, regardless of their type. They are then linked (via a reference) to an object that contains detailed information.
The categories of event your application can receive are as follows:
See the Core Resources/Event page for a full list of event types, and more details on the data you can expect for each type.
"},{"location":"reference/events/#blockchain-events","title":"Blockchain events","text":"FireFly allows your application to subscribe to any event from a blockchain smart contract.
In order for applications connected to the FireFly API to receive blockchain events from a smart contract, a ContractListener first must be created to instruct FireFly to listen to those events from the blockchain (via the blockchain plugin).
Once you have configured the blockchain event listener, every event detected from the blockchain will result in a FireFly event delivered to your application of type blockchain_event_received.
As of 1.3.1, a group of event filters can be established under a single topic when supported by the connector, which has benefits for ordering. See Contract Listeners for more detail.
Check out the Custom Contracts Tutorial for a walk-through of how to set up listeners for the events from your smart contracts.
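For a concrete sense of what creating a listener involves, the sketch below builds a request body naming the interface, the on-chain location, the event, and a topic. The field names (`interface`, `location`, `eventPath`, `topic`) and the placeholder values are assumptions for illustration; the authoritative schema is in the Contract Listeners reference.

```python
import json

def contract_listener_body(interface_id: str, address: str,
                           event_path: str, topic: str) -> str:
    # Field names are illustrative assumptions; verify against your
    # FireFly node's contract listener API reference before use.
    return json.dumps({
        "interface": {"id": interface_id},
        "location": {"address": address},
        "eventPath": event_path,
        "topic": topic,
    })

body = contract_listener_body(
    "8bdd27a5-0000-4000-8000-7aa31b908000",            # placeholder interface UUID
    "0x0123456789abcdef0123456789abcdef01234567",      # placeholder contract address
    "Changed",
    "simple-storage",
)
```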
FireFly automatically establishes listeners for some blockchain events:
Events from the FireFly BatchPin contract that is used to pin identities, off-chain data broadcast and private messaging to the blockchain.
Events from Token contracts, for which a Token Pool has been configured. These events are detected indirectly via the token connector.
FireFly provides a Wallet API that is pluggable to multiple token implementations without needing to change your app.
The pluggable API/Event interface allows all kinds of technical implementations of tokens to be fitted into a common framework.
The following wallet operations are supported. These are universal to all token implementations - NFTs and fungible tokens alike:
FireFly processes, indexes and stores the events associated with these actions, for any Token Pool that has been configured on the FireFly node.
See Token Transfer and Token Approval for more information on the individual operations.
The token connector is responsible for mapping from the raw Blockchain Events, to the FireFly model for tokens. Reference token connector implementations are provided for common interface standards implemented by tokens - like ERC-20, ERC-721 and ERC-1155.
A particular token contract might have many additional features that are unique to that contract, particularly around governance. For these you would use the Smart Contract features of FireFly to interact with the blockchain API and Events directly.
"},{"location":"reference/events/#message-events-on-chain-off-chain-coordinated","title":"Message events: on-chain / off-chain coordinated","text":"Event aggregation between data arriving off-chain, and the associated ordered proof/transaction events being confirmed on-chain, is a complex orchestration task.
The universal order and additional transaction logic on-chain must be the source of truth for when and how an event is processed.
However, that event cannot be processed until the off-chain private/broadcast data associated with that event is also available and verified against the on-chain hash of that additional data.
They might arrive in any order, and no further events can be processed on that business transaction until the data is available.
Multiple parties might be emitting events as part of the business transaction, and the outcome will only be assured to be the same by all parties if they process these events in the same order.
Hyperledger FireFly handles this for you. Events related to a message are not emitted until both the on-chain and off-chain parts (including large binary attachments) are available+verified in your local FireFly node, and all previous messages on the same topic have been processed successfully by your application.
Your application just needs to:
Choose a topic for your messages that determines the ordered stream they are part of, such as a business transaction identifier. See Message for more information
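The aggregation rule above (hold each message until both its on-chain pin and its off-chain data have arrived and verified, and deliver strictly in topic order) can be sketched as a simplified model. This is an illustration of the ordering guarantee, not FireFly's internal implementation.

```python
class TopicAggregator:
    """Delivers messages on one topic in pin order, once both halves arrive."""

    def __init__(self):
        self.pins = []        # message IDs in the order confirmed on-chain
        self.data = set()     # message IDs whose off-chain payload is verified
        self.delivered = 0    # offset into self.pins

    def on_pin(self, msg_id):
        self.pins.append(msg_id)
        return self._drain()

    def on_data(self, msg_id):
        self.data.add(msg_id)
        return self._drain()

    def _drain(self):
        out = []
        # Deliver the next pinned message only if its data is here; otherwise
        # the topic blocks, so later messages cannot overtake it.
        while self.delivered < len(self.pins) and self.pins[self.delivered] in self.data:
            out.append(self.pins[self.delivered])
            self.delivered += 1
        return out
```

Note how a message whose data arrives early still waits behind an earlier message whose data is missing: that is exactly the "no further events can be processed on that business transaction until the data is available" behavior.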
"},{"location":"reference/events/#transaction-submission-events","title":"Transaction submission events","text":"These events are emitted each time a new transaction is initiated via the FireFly API.
These events are only emitted on the local FireFly node that initiates an activity.
For more information about FireFly Transactions, and how they relate to blockchain transactions, see Transaction.
"},{"location":"reference/firefly_interface_format/","title":"FireFly Interface Format","text":"FireFly defines a common, blockchain agnostic way to describe smart contracts. This is referred to as a Contract Interface, and it is written in the FireFly Interface (FFI) format. It is a simple JSON document that has a name, a namespace, a version, a list of methods, and a list of events.
"},{"location":"reference/firefly_interface_format/#overview","title":"Overview","text":"There are four required fields when broadcasting a contract interface in FireFly: a name, a version, a list of methods, and a list of events. A namespace field will also be filled in automatically based on the URL path parameter. Here is an example of the structure of the required fields:
{\n \"name\": \"example\",\n \"version\": \"v1.0.0\",\n \"methods\": [],\n \"events\": []\n}\n NOTE: Contract interfaces are scoped to a namespace. Within a namespace each contract interface must have a unique name and version combination. The same name and version combination can exist in different namespaces simultaneously.
"},{"location":"reference/firefly_interface_format/#method","title":"Method","text":"Let's look at what goes inside the methods array now. It is also a JSON object that has a name, a list of params which are the arguments the function will take and a list of returns which are the return values of the function. It also has an optional description which can be helpful in OpenAPI Spec generation. Finally, it has an optional details object which wraps blockchain specific information about this method. This can be used by the blockchain plugin when invoking this function, and it is also used in documentation generation.
{\n \"name\": \"add\",\n \"description\": \"Add two numbers together\",\n \"params\": [],\n \"returns\": [],\n \"details\": {}\n}\n"},{"location":"reference/firefly_interface_format/#event","title":"Event","text":"What goes into the events array is very similar. It is also a JSON object that has a name and a list of params. The difference is that events don't have returns. Arguments that are passed to the event when it is emitted are in params. It also has an optional description which can be helpful in OpenAPI Spec generation. Finally, it has an optional details object which wraps blockchain specific information about this event. This can be used by the blockchain plugin when processing this event, and it is also used in documentation generation.
{\n \"name\": \"added\",\n \"description\": \"An event that occurs when numbers have been added\",\n \"params\": [],\n \"details\": {}\n}\n"},{"location":"reference/firefly_interface_format/#param","title":"Param","text":"Both methods, and events have lists of params or returns, and the type of JSON object that goes in each of these arrays is the same. It is simply a JSON object with a name and a schema. There is also an optional details field that is passed to the blockchain plugin for blockchain specific requirements.
{\n \"name\": \"x\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {}\n }\n}\n"},{"location":"reference/firefly_interface_format/#schema","title":"Schema","text":"The param schema is an important field which tells FireFly the type information about this particular field. This is used in several different places, such as OpenAPI Spec generation, API request validation, and blockchain request preparation.
The schema field accepts JSON Schema (version 2020-12) with several additional requirements:
The type field is always mandatory. The valid values for type are: boolean, integer, string, object, array. NOTE: Floats or decimals are not currently accepted because certain underlying blockchains (e.g. Ethereum) only allow integers
The type field here is the JSON input type when making a request to FireFly to invoke or query a smart contract. This type can be different from the actual blockchain type, usually specified in the details field, if there is a compatible type mapping between the two.
The details field is quite important in some cases. Because the details field is passed to the blockchain plugin, it is used to encapsulate blockchain specific type information about a particular field. Additionally, because each blockchain plugin can add rules to the list of schema requirements above, a blockchain plugin can enforce that certain fields are always present within the details field.
For example, the Ethereum plugin always needs to know what Solidity type the field is. It also defines several optional fields. A full Ethereum details field may look like:
{\n \"type\": \"uint256\",\n \"internalType\": \"uint256\",\n \"indexed\": false\n}\n"},{"location":"reference/firefly_interface_format/#automated-generation-of-firefly-interfaces","title":"Automated generation of FireFly Interfaces","text":"A convenience endpoint exists on the API to facilitate converting from native blockchain interface formats such as an Ethereum ABI to the FireFly Interface format. For details, please see the API documentation for the contract interface generation endpoint.
For an example of using this endpoint with a specific Ethereum contract, please see the Tutorial to Work with custom smart contracts.
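As a hedged illustration of using the conversion endpoint, a client might prepare the request as below. The endpoint path (`/contracts/interfaces/generate`) and the `input`/`abi` body wrapper are assumptions; confirm both against the API documentation for the contract interface generation endpoint.

```python
import json

def build_generate_request(namespace: str, abi: list) -> tuple:
    # Path and body shape are illustrative assumptions; see the FireFly API
    # docs for the authoritative contract interface generation endpoint.
    path = f"/api/v1/namespaces/{namespace}/contracts/interfaces/generate"
    body = json.dumps({"input": {"abi": abi}})
    return path, body

# A minimal Ethereum ABI fragment for a "set(uint256)" function.
abi = [{"name": "set", "type": "function",
        "inputs": [{"name": "newValue", "type": "uint256",
                    "internalType": "uint256"}],
        "outputs": []}]
path, body = build_generate_request("default", abi)
```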
"},{"location":"reference/firefly_interface_format/#full-example","title":"Full Example","text":"Putting it all together, here is a full example of the FireFly Interface format with all the fields filled in:
{\n \"namespace\": \"default\",\n \"name\": \"SimpleStorage\",\n \"description\": \"A simple smart contract that stores and retrieves an integer on-chain\",\n \"version\": \"v1.0.0\",\n \"methods\": [\n {\n \"name\": \"get\",\n \"description\": \"Retrieve the value of the stored integer\",\n \"params\": [],\n \"returns\": [\n {\n \"name\": \"output\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ],\n \"details\": {\n \"stateMutability\": \"viewable\"\n }\n },\n {\n \"name\": \"set\",\n \"description\": \"Set the stored value on-chain\",\n \"params\": [\n {\n \"name\": \"newValue\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ],\n \"returns\": [],\n \"details\": {\n \"stateMutability\": \"payable\"\n }\n }\n ],\n \"events\": [\n {\n \"name\": \"Changed\",\n \"description\": \"An event that is fired when the stored integer value changes\",\n \"params\": [\n {\n \"name\": \"from\",\n \"schema\": {\n \"type\": \"string\",\n \"details\": {\n \"type\": \"address\",\n \"internalType\": \"address\",\n \"indexed\": true\n }\n }\n },\n {\n \"name\": \"value\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ],\n \"details\": {}\n }\n ]\n}\n"},{"location":"reference/idempotency/","title":"Idempotency Keys","text":""},{"location":"reference/idempotency/#idempotency","title":"Idempotency","text":"The transaction submission REST APIs of Hyperledger FireFly are idempotent.
Idempotent APIs allow an application to safely submit a request multiple times, and for the transaction to only be accepted and executed once.
This is the well-accepted approach for REST APIs over HTTP/HTTPS to achieve resilience, as HTTP requests can fail in indeterminate ways. For example, in a request or gateway timeout situation, the requester is unable to know whether the request will or will not eventually be processed.
There are various types of FireFly transaction that can be submitted. These include direct submission of blockchain transactions to a smart contract, as well as more complex transactions including coordination of multiple operations across on-chain and off-chain connectors.
In order for Hyperledger FireFly to deduplicate transactions, and make them idempotent, the application must supply an idempotencyKey on each API request.
The caller of the API specifies its own unique identifier (an arbitrary string up to 256 characters) that uniquely identifies the request, in the idempotencyKey field of the API.
So if there is a network connectivity failure, or an abrupt termination of either runtime, the application can safely resubmit the REST API call; if the original request was already accepted, FireFly returns a 409 Conflict HTTP status code rather than executing the transaction twice.
Examples of how an app might construct such an idempotencyKey include a hash of the unique business data of the request, or a natural business identifier such as an invoice number.
Be careful of cases where the business data might not be unique - like a transfer of 10 coins from A to B.
Such a transfer could happen multiple times, and each would be a separate business transaction.
Whereas a transfer with invoice number abcd1234 of 10 coins from A to B would be assured to be unique.
This moves the challenge up one layer into your application. How does that unique ID get generated? Is that itself idempotent?
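One way to answer that question is to derive the key deterministically from the business data itself. The sketch below is illustrative only (the function name and key format are not part of the FireFly API); it combines a natural business identifier with a hash of the request payload, so a genuine retry always produces the same key while distinct business transactions never collide:

```python
import hashlib
import json

def make_idempotency_key(invoice_number: str, payload: dict) -> str:
    # sort_keys makes the digest stable regardless of dict ordering,
    # so a retried request hashes to the same key.
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    key = f"{invoice_number}/{digest}"
    # FireFly accepts an arbitrary string of up to 256 characters
    assert len(key) <= 256
    return key

key = make_idempotency_key("abcd1234", {"from": "A", "to": "B", "amount": 10})
```

Because the key is a pure function of the business data, the ID generation itself is idempotent: no separate counter or database sequence is needed.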
FireFly provides an idempotent interface downstream to connectors.
Each operation within a FireFly transaction receives a unique ID within the overall transaction that is used as an idempotency key when invoking that connector.
Well-formed connectors honor this idempotency key internally, ensuring that the end-to-end transaction submission is idempotent.
Key examples of such connectors are EVMConnect and others built on the Blockchain Connector Toolkit.
When an operation is retried automatically, the same idempotency key is re-used to avoid resubmission.
"},{"location":"reference/idempotency/#short-term-retry","title":"Short term retry","text":"The FireFly core uses standard HTTP request code to communicate with all connector APIs.
This code includes exponential backoff retry, which can be enabled with a simple boolean in the plugin configuration of FireFly core. The minimum retry interval, maximum retry interval, and backoff factor can also be tuned individually on each connector.
See Configuration Reference for more information.
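As a sketch, the retry settings on a connector plugin look like the following. The key names here follow the common FireFly REST client configuration, but the exact section and defaults should be confirmed against the Configuration Reference for your plugin:

```yaml
blockchain:
  - name: blockchain0
    type: ethereum
    ethereum:
      ethconnect:
        url: http://ethconnect_0:8080
        retry:
          enabled: true        # simple boolean to turn on retry
          count: 5             # maximum number of retries
          initWaitTime: 250ms  # minimum wait before the first retry
          maxWaitTime: 30s     # cap on the exponential backoff
```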
"},{"location":"reference/idempotency/#administrative-operation-retry","title":"Administrative operation retry","text":"The operations/{operationId}/retry API can be called administratively to resubmit a transaction that has reached Failed status, or otherwise been determined by an operator/monitor to be unrecoverable within the connector.
In this case, the previous operation is marked Retried, a new operation ID is allocated, and the operation is re-submitted to the connector with this new ID.
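For example, sketched as a raw HTTP request (the namespace segment and /api/v1 prefix are assumed from FireFly's usual API conventions, and an empty JSON body is shown):

```
POST /api/v1/namespaces/default/operations/{operationId}/retry
Content-Type: application/json

{}
```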
Identities are a critical part of using FireFly in a multi-party system. Every party that joins a multi-party system must begin by claiming an on- and off-chain identity, which is described with a unique DID. Each type of identity is also associated with an on- or off-chain verifier, which can be used in some way to check the authorship of a piece of data. Together, these concepts form the backbone of the trust model for exchanging multi-party data.
"},{"location":"reference/identities/#types-of-identities","title":"Types of Identities","text":"There are three types of identities:
"},{"location":"reference/identities/#org","title":"org","text":"Organizations are the primary identity type in FireFly. They represent a logical on-chain signing identity, and the attached verifier is therefore a blockchain key (with the exact format depending on the blockchain being used). Every party in a multi-party system must claim a root organization identity as the first step to joining the network.
The root organization name and key must be defined in the FireFly config (once for every multi-party system). It can be claimed with a POST to /network/organizations/self.
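Sketched as a raw HTTP request, a minimal claim might look like the following (the /api/v1 prefix is assumed from FireFly's API conventions; the body can be empty because FireFly takes the org details from its config):

```
POST /api/v1/network/organizations/self
Content-Type: application/json

{}
```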
Organizations may have child identities of any type.
"},{"location":"reference/identities/#node","title":"node","text":"Nodes represent a logical off-chain identity - and specifically, they are tied to an instance of a data exchange connector. The format of the attached verifier depends on the data exchange plugin being used, but it will be mapped to some validation provided by that plugin (ie the name of an X.509 certificate or similar). Every party in a multi-party system must claim a node identity when joining the network, which must be a child of one of its organization identities (but it is possible for many nodes to share a parent organization).
The node name must be defined in the FireFly config (once for every multi-party system). It can be claimed with a POST to /network/nodes/self.
Nodes must be a child of an organization, and cannot have any child identities of their own.
Note that \"nodes\" as an identity concept are distinct from FireFly supernodes, from underlying blockchain nodes, and from anywhere else the term \"node\" happens to be used.
"},{"location":"reference/identities/#custom","title":"custom","text":"Custom identities are similar to organizations, but are provided for applications to define their own more granular notions of identity. They are associated with an on-chain verifier in the same way as organizations.
They can only have child identities which are also of type \"custom\".
"},{"location":"reference/identities/#identity-claims","title":"Identity Claims","text":"Before an identity can be used within a multi-party system, it must be claimed. The identity claim is a special type of broadcast message sent by FireFly to establish an identity uniquely among the parties in the multi-party system. As with other broadcasts, this entails an on-chain transaction which contains a public reference to an off-chain piece of data (such as an IPFS reference) describing the details of the identity claim.
The claim data consists of information on the identity being claimed - such as the type, the DID, and the parent (if applicable). The DID must be unique and unclaimed. The verifier will be inferred from the message - for on-chain identities (org and custom), it is the blockchain key that was used to sign the on-chain portion of the message, while for off-chain identities (nodes), it is an identifier queried from data exchange.
For on-chain identities with a parent, two messages are actually required - the claim message signed with the new identity's blockchain key, as well as a separate verification message signed with the parent identity's blockchain key. Both messages must be received before the identity is confirmed.
"},{"location":"reference/identities/#messaging","title":"Messaging","text":"In the context of a multi-party system, FireFly provides capabilities for sending off-chain messages that are pinned to an on-chain proof. The sender of every message must therefore have an on-chain and off-chain identity. For private messages, every recipient must also have an on-chain and off-chain identity.
"},{"location":"reference/identities/#sender","title":"Sender","text":"When sending a message, the on-chain identity of the sender is controlled by the author and key fields.
If author alone is specified, it should be the DID of an org or custom identity; the associated verifier will be looked up to use as the key. If key alone is specified, it must match the registered blockchain verifier for an org or custom identity that was previously claimed; a reverse lookup will be used to populate the DID for the author. If author and key are both specified, they will be used as-is (this can be used to send private messages with an unregistered blockchain key). The resolved key will be used to sign the blockchain transaction, which establishes the sender's on-chain identity.
The sender's off-chain identity is always controlled by the node.name from the config along with the data exchange plugin.
When specifying private message recipients, each one has an identity and a node.
If identity alone is specified, it should be the DID of an org or custom identity; the first node owned by that identity or one of its ancestors will be automatically selected. If identity and node are both specified, they will be used as-is; the node should be a child of the given identity or one of its ancestors. The node in this case will control how the off-chain portion of the message is routed via data exchange.
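As an illustrative sketch (the DIDs and blockchain key here are hypothetical), a private message request combining the sender and recipient fields might look like:

```json
{
  "header": {
    "author": "did:firefly:org/org1",
    "key": "0x0ef1d0dd56a8fb1226c0eac374000b81d6c8304a"
  },
  "group": {
    "members": [
      { "identity": "did:firefly:org/org2", "node": "did:firefly:node/node2" }
    ]
  },
  "data": [{ "value": "hello" }]
}
```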
When a message is received, FireFly verifies the following:
The author and key are specified in the message. The author must be a known org or custom identity, and the key must match the blockchain key that was used to sign the on-chain portion of the message. For broadcast messages, the key must match the registered verifier for the author. For private messages, the sending node (as reported by data exchange) must be a known node identity which is a child of the message's author identity or one of its ancestors, and the combination of the author identity and the node must also be found in the message group. In addition, the data exchange plugin is responsible for verifying the sending and receiving identities for the off-chain data (such as validating the relevant certificates).
"},{"location":"reference/namespaces/","title":"Namespaces","text":""},{"location":"reference/namespaces/#introduction-to-namespaces","title":"Introduction to Namespaces","text":"Namespaces are a construct for segregating data and operations within a FireFly supernode. Each namespace is an isolated environment within a FireFly runtime, that allows independent configuration of:
They can be thought of in two basic modes:
"},{"location":"reference/namespaces/#multi-party-namespaces","title":"Multi-party Namespaces","text":"This namespace is shared with one or more other FireFly nodes. It requires three types of communication plugins - blockchain, data exchange, and shared storage. Organization and node identities must be claimed with an identity broadcast when joining the namespace, which establishes credentials for blockchain and off-chain communication. Shared objects can be defined in the namespace (such as datatypes and token pools), and details of them will be implicitly broadcast to other members.
This type of namespace is used when multiple parties need to share on- and off-chain data and agree upon the ordering and authenticity of that data. For more information, see the multi-party system overview.
"},{"location":"reference/namespaces/#gateway-namespaces","title":"Gateway Namespaces","text":"Nothing in this namespace will be shared automatically, and no assumptions are made about whether other parties connected through this namespace are also using Hyperledger FireFly. Plugins for data exchange and shared storage are not supported. If any identities or definitions are created in this namespace, they will be stored in the local database, but will not be shared implicitly outside the node.
This type of namespace is mainly used when interacting directly with a blockchain, without assuming that the interaction needs to conform to FireFly's multi-party system model.
"},{"location":"reference/namespaces/#configuration","title":"Configuration","text":"FireFly nodes can be configured with one or many namespaces of different modes. This means that a single FireFly node can be used to interact with multiple distinct blockchains, multiple distinct token economies, and multiple business networks.
Below is an example plugin and namespace configuration containing both a multi-party and gateway namespace:
plugins:\n database:\n - name: database0\n type: sqlite3\n sqlite3:\n migrations:\n auto: true\n url: /etc/firefly/db?_busy_timeout=5000\n blockchain:\n - name: blockchain0\n type: ethereum\n ethereum:\n ethconnect:\n url: http://ethconnect_0:8080\n topic: \"0\"\n - name: blockchain1\n type: ethereum\n ethereum:\n ethconnect:\n url: http://ethconnect_01:8080\n topic: \"0\"\n dataexchange:\n - name: dataexchange0\n type: ffdx\n ffdx:\n url: http://dataexchange_0:3000\n sharedstorage:\n - name: sharedstorage0\n type: ipfs\n ipfs:\n api:\n url: http://ipfs_0:5001\n gateway:\n url: http://ipfs_0:8080\n tokens:\n - name: erc20_erc721\n broadcastName: erc20_erc721\n type: fftokens\n fftokens:\n url: http://tokens_0_0:3000\nnamespaces:\n default: alpha\n predefined:\n - name: alpha\n description: Default predefined namespace\n defaultKey: 0x123456\n plugins: [database0, blockchain0, dataexchange0, sharedstorage0, erc20_erc721]\n multiparty:\n networkNamespace: alpha\n enabled: true\n org:\n name: org0\n description: org0\n key: 0x123456\n node:\n name: node0\n description: node0\n contract:\n - location:\n address: 0x4ae50189462b0e5d52285f59929d037f790771a6\n firstEvent: 0\n - location:\n address: 0x3c1bef20a7858f5c2f78bda60796758d7cafff27\n firstEvent: 5000\n - name: omega\n defaultkey: 0x48a54f9964d7ceede2d6a8b451bf7ad300c7b09f\n description: Gateway namespace\n plugins: [database0, blockchain1, erc20_erc721]\n The namespaces.predefined object contains the follow sub-keys:
defaultKey is a blockchain key used to sign transactions when none is specified (in multi-party mode, defaults to the org key). plugins is an array of plugin names to be activated for this namespace (defaults to all available plugins if omitted). multiparty.networkNamespace is the namespace name to be sent in plugin calls, if it differs from the locally used name (useful for interacting with multiple shared namespaces of the same name - defaults to the value of name). multiparty.enabled controls if multi-party mode is enabled (defaults to true if an org key or org name is defined on this namespace or in the deprecated org section at the root). multiparty.org is the root org identity for this multi-party namespace (containing name, description, and key). multiparty.node is the local node identity for this multi-party namespace (containing name and description). multiparty.contract is an array of objects describing the location(s) of a FireFly multi-party smart contract. Its children are blockchain-agnostic location and firstEvent fields, with formats identical to the same fields on custom contract interfaces and contract listeners. The blockchain plugin will interact with the first contract in the list until instructions are received to terminate it and migrate to the next. Several validation rules apply: the name must be unique on this node; the reserved string ff_system cannot be used as a name or multiparty.networkNamespace; a database plugin is required for every namespace; if multiparty.enabled is true, plugins must include one each of blockchain, dataexchange, and sharedstorage; if multiparty.enabled is false, plugins must not include dataexchange or sharedstorage. All namespaces must be called out in the FireFly config file in order to be valid. Namespaces found in the database but not represented in the config file will be ignored.
"},{"location":"reference/namespaces/#definitions","title":"Definitions","text":"In FireFly, definitions are immutable payloads that are used to define identities, datatypes, smart contract interfaces, token pools, and other constructs. Each type of definition in FireFly has a schema that it must adhere to. Some definitions also have a name and a version which must be unique within a namespace. In a multiparty namespace, definitions are broadcasted to other organizations.
"},{"location":"reference/namespaces/#local-definitions","title":"Local Definitions","text":"The following are all \"definition\" types in FireFly:
For gateway namespaces, the APIs which create these definitions will perform an immediate local database insert, instead of a broadcast. Some additional caveats apply.
To enable TLS in FireFly, configuration is available to provide certificates and keys.
The common configuration is as follows:
tls:\n enabled: true/false # Toggle on or off TLS\n caFile: <path to the CA file you want the client or server to trust>\n certFile: <path to the cert file you want the client or server to use when performing authentication in mTLS>\n keyFile: <path to the private key file you want the client or server to use when performing authentication in mTLS>\n clientAuth: true/false # Only applicable to the server side, to toggle on or off client authentication\n requiredDNAttributes: A set of required subject DN attributes. Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes)\n NOTE The CAs, certificates and keys have to be in PEM format.
"},{"location":"reference/tls/#configuring-tls-for-the-api-server","title":"Configuring TLS for the API server","text":"Using the above configuration, we can place it under the http config and enable TLS or mTLS for any API call.
See this config section for details
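For instance, a sketch of the TLS block placed under the http config might look like the following (the port, address, and file paths are illustrative):

```yaml
http:
  port: 5000
  address: 0.0.0.0
  tls:
    enabled: true
    caFile: /etc/firefly/tls/ca.pem
    certFile: /etc/firefly/tls/cert.pem
    keyFile: /etc/firefly/tls/key.pem
    clientAuth: true   # require mTLS from API clients
```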
"},{"location":"reference/tls/#configuring-tls-for-the-webhooks","title":"Configuring TLS for the webhooks","text":"Using the above configuration, we can place it under the events.webhooks config and enable TLS or mTLS for any webhook call.
See this config section for details
"},{"location":"reference/tls/#configuring-clients-and-websockets","title":"Configuring clients and websockets","text":"Firefly has a set of HTTP clients and websockets that communicate the external endpoints and services that could be secured using TLS. In order to configure these clients, we can use the same configuration as above in the respective places in the config which relate to those clients.
For example, if you wish to configure the ethereum blockchain connector with TLS, you would look at this config section
For more clients, search in the configuration reference for a TLS section.
"},{"location":"reference/tls/#enhancing-validation-of-certificates","title":"Enhancing validation of certificates","text":"In the case where we want to verify that a specific client certificate has certain attributes we can use the requiredDNAtributes configuration as described above. This will allow you by the means of a regex expresssion matching against well known distinguished names (DN). To learn more about a DNs look at this document
fftokens is a protocol that can be implemented by token connector runtimes in order to be usable by the fftokens plugin in FireFly.
The connector runtime must expose an HTTP and websocket server, along with a minimum set of HTTP APIs and websocket events. Each connector will be strongly coupled to a specific ledger technology and token standard(s), but no assumptions are made in the fftokens spec about what these technologies must be, as long as they can satisfy the basic requirements laid out here.
Note that this is an internal protocol in the FireFly ecosystem - application developers working against FireFly should never need to care about or directly interact with a token connector runtime. The audience for this document is only developers interested in creating new token connectors (or editing/forking existing ones).
Two implementations of this specification have been created to date (both based on common Ethereum token standards) - firefly-tokens-erc1155 and firefly-tokens-erc20-erc721.
"},{"location":"reference/microservices/fftokens/#http-apis","title":"HTTP APIs","text":"This is the minimum set of APIs that must be implemented by a conforming token connector. A connector may choose to expose other APIs for its own purposes. All requests and responses to the APIs below are encoded as JSON. The APIs are currently understood to live under a /api/v1 prefix.
POST /createpool","text":"Create a new token pool. The exact meaning of this is flexible - it may mean invoking a contract or contract factory to actually define a new set of tokens via a blockchain transaction, or it may mean indexing a set of tokens that already exists (depending on the options a connector accepts in config).
In a multiparty network, this operation will only be performed by one of the parties, and FireFly will broadcast the result to the others.
FireFly will store a \"pending\" token pool after a successful creation, but will replace it with a \"confirmed\" token pool after a successful activation (see below).
Request
{\n \"type\": \"fungible\",\n \"signer\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A\",\n \"namespace\": \"default\",\n \"name\": \"FFCoin\",\n \"symbol\": \"FFC\",\n \"data\": \"pool-metadata\",\n \"requestId\": \"1\",\n \"config\": {}\n}\n Parameter Type Description type string enum The type of pool to create. Currently supported types are \"fungible\" and \"nonfungible\". It is recommended (but not required) that token connectors support both. Unrecognized/unsupported types should be rejected with HTTP 400. signer string The signing identity to be used for the blockchain transaction, in a format understood by this connector. namespace string The namespace of the token pool name string (OPTIONAL) If supported by this token contract, this is a requested name for the token pool. May be ignored at the connector's discretion. symbol string (OPTIONAL) If supported by this token contract, this is a requested symbol for the token pool. May be ignored at the connector's discretion. requestId string (OPTIONAL) A unique identifier for this request. Will be included in the \"receipt\" websocket event to match receipts to requests. data string (OPTIONAL) A data string that should be returned in the connector's response to this creation request. config object (OPTIONAL) An arbitrary JSON object where the connector may accept additional parameters if desired. Each connector may define its own valid options to influence how the token pool is created. Response
HTTP 200: pool creation was successful, and the pool details are returned in the response.
See Response Types: Token Pool
HTTP 202: request was accepted, but pool will be created asynchronously, with \"receipt\" and \"token-pool\" events sent later on the websocket.
See Response Types: Async Request
"},{"location":"reference/microservices/fftokens/#post-activatepool","title":"POST /activatepool","text":"Activate a token pool to begin receiving events. Generally this means the connector will create blockchain event listeners for transfer and approval events related to the set of tokens encompassed by this token pool.
In a multiparty network, this step will be performed by every member after a successful token pool broadcast. It therefore also serves the purpose of validating the broadcast info - if the connector does not find a valid pool given the poolLocator and config information passed in to this call, the pool should not get confirmed.
Request
{\n \"namespace\": \"default\",\n \"poolLocator\": \"id=F1\",\n \"poolData\": \"extra-pool-info\",\n \"config\": {}\n}\n Parameter Type Description namespace string The namespace of the token pool poolLocator string The locator of the pool, as supplied by the output of the pool creation. poolData string (OPTIONAL) A data string that should be permanently attached to this pool and returned in all events. config object (OPTIONAL) An arbitrary JSON object where the connector may accept additional parameters if desired. This should be the same config object that was passed when the pool was created. Response
HTTP 200: pool activation was successful, and the pool details are returned in the response.
See Response Types: Token Pool
HTTP 202: request was accepted, but pool will be activated asynchronously, with \"receipt\" and \"token-pool\" events sent later on the websocket.
See Response Types: Async Request
HTTP 204: activation was successful - no separate receipt will be delivered, but \"token-pool\" event will be sent later on the websocket.
No body
"},{"location":"reference/microservices/fftokens/#post-deactivatepool","title":"POST /deactivatepool","text":"Deactivate a token pool to stop receiving events and delete all blockchain listeners related to that pool.
Request
{\n \"namespace\": \"default\",\n \"poolLocator\": \"id=F1\",\n \"poolData\": \"extra-pool-info\",\n \"config\": {}\n}\n Parameter Type Description namespace string The namespace of the token pool poolLocator string The locator of the pool, as supplied by the output of the pool creation. poolData string (OPTIONAL) The data string that was attached to this pool at activation. config object (OPTIONAL) An arbitrary JSON object where the connector may accept additional parameters if desired. Response
HTTP 204: deactivation was successful, and one or more listeners were deleted.
No body
HTTP 404: no blockchain listeners were found for the given pool information.
No body
"},{"location":"reference/microservices/fftokens/#post-checkinterface","title":"POST /checkinterface","text":"This is an optional (but recommended) API for token connectors. If implemented, support will be indicated by the presence of the interfaceFormat field in all Token Pool responses.
In the case that a connector supports multiple variants of a given token standard (such as many different ways to structure \"mint\" or \"burn\" calls on an underlying smart contract), this API allows the connector to be provided with a full description of the interface methods in use for a given token pool, so the connector can determine which methods it knows how to invoke.
Request
{\n \"poolLocator\": \"id=F1\",\n \"format\": \"abi\",\n \"methods\": [\n {\n \"name\": \"burn\",\n \"type\": \"function\",\n \"inputs\": [\n {\n \"internalType\": \"uint256\",\n \"name\": \"tokenId\",\n \"type\": \"uint256\"\n }\n ],\n \"outputs\": [],\n \"stateMutability\": \"nonpayable\"\n },\n ...\n ]\n}\n Parameter Type Description poolLocator string The locator of the pool, as supplied by the output of the pool creation. format string enum The format of the data in this payload. Should match the interfaceFormat as supplied by the output of the pool creation. methods object array A list of all the methods available on the interface underpinning this token pool, encoded in the format specified by format. Response
HTTP 200: interface was successfully parsed, and methods of interest are returned in the body.
The response body includes a section for each type of token operation (burn/mint/transfer/approval), which specifies a subset of the input body useful to that operation. The caller (FireFly) can then store and provide the proper subset of the interface for every future token operation (via the interface parameter).
{\n \"burn\": {\n \"format\": \"abi\",\n \"methods\": [\n {\n \"name\": \"burn\",\n \"type\": \"function\",\n \"inputs\": [\n {\n \"internalType\": \"uint256\",\n \"name\": \"tokenId\",\n \"type\": \"uint256\"\n }\n ],\n \"outputs\": [],\n \"stateMutability\": \"nonpayable\"\n }\n ]\n },\n \"mint\": { ... },\n \"transfer\": { ... },\n \"approval\": { ... }\n}\n"},{"location":"reference/microservices/fftokens/#post-mint","title":"POST /mint","text":"Mint new tokens.
Request
{\n \"namespace\": \"default\",\n \"poolLocator\": \"id=F1\",\n \"signer\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A\",\n \"to\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A\",\n \"amount\": \"10\",\n \"tokenIndex\": \"1\",\n \"uri\": \"ipfs://000000\",\n \"requestId\": \"1\",\n \"data\": \"transfer-metadata\",\n \"config\": {},\n \"interface\": {}\n}\n Parameter Type Description namespace string The namespace of the token pool poolLocator string The locator of the pool, as supplied by the output of the pool creation. signer string The signing identity to be used for the blockchain transaction, in a format understood by this connector. to string The identity to receive the minted tokens, in a format understood by this connector. amount number string The amount of tokens to mint. tokenIndex string (OPTIONAL) For non-fungible tokens that require choosing an index at mint time, the index of the specific token to mint. uri string (OPTIONAL) For non-fungible tokens that support choosing a URI at mint time, the URI to be attached to the token. requestId string (OPTIONAL) A unique identifier for this request. Will be included in the \"receipt\" websocket event to match receipts to requests. data string (OPTIONAL) A data string that should be returned in the connector's response to this mint request. config object (OPTIONAL) An arbitrary JSON object where the connector may accept additional parameters if desired. Each connector may define its own valid options to influence how the mint is carried out. interface object (OPTIONAL) Details on interface methods that are useful to this operation, as negotiated previously by a /checkinterface call. Response
HTTP 202: request was accepted, but mint will occur asynchronously, with \"receipt\" and \"token-mint\" events sent later on the websocket.
See Response Types: Async Request
"},{"location":"reference/microservices/fftokens/#post-burn","title":"POST /burn","text":"Burn tokens.
Request
{\n \"namespace\": \"default\",\n \"poolLocator\": \"id=F1\",\n \"signer\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A\",\n \"from\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A\",\n \"amount\": \"10\",\n \"tokenIndex\": \"1\",\n \"requestId\": \"1\",\n \"data\": \"transfer-metadata\",\n \"config\": {},\n \"interface\": {}\n}\n Parameter Type Description namespace string The namespace of the token pool poolLocator string The locator of the pool, as supplied by the output of the pool creation. signer string The signing identity to be used for the blockchain transaction, in a format understood by this connector. from string The identity that currently owns the tokens to be burned, in a format understood by this connector. amount number string The amount of tokens to burn. tokenIndex string (OPTIONAL) For non-fungible tokens, the index of the specific token to burn. requestId string (OPTIONAL) A unique identifier for this request. Will be included in the \"receipt\" websocket event to match receipts to requests. data string (OPTIONAL) A data string that should be returned in the connector's response to this burn request. config object (OPTIONAL) An arbitrary JSON object where the connector may accept additional parameters if desired. Each connector may define its own valid options to influence how the burn is carried out. interface object (OPTIONAL) Details on interface methods that are useful to this operation, as negotiated previously by a /checkinterface call. Response
HTTP 202: request was accepted, but burn will occur asynchronously, with \"receipt\" and \"token-burn\" events sent later on the websocket.
See Response Types: Async Request
"},{"location":"reference/microservices/fftokens/#post-transfer","title":"POST /transfer","text":"Transfer tokens from one address to another.
Request
{\n \"namespace\": \"default\",\n \"poolLocator\": \"id=F1\",\n \"signer\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A\",\n \"from\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A\",\n \"to\": \"0xb107ed9caa1323b7bc36e81995a4658ec2251951\",\n \"amount\": \"1\",\n \"tokenIndex\": \"1\",\n \"requestId\": \"1\",\n \"data\": \"transfer-metadata\",\n \"config\": {},\n \"interface\": {}\n}\n Parameter Type Description namespace string The namespace of the token pool poolLocator string The locator of the pool, as supplied by the output of the pool creation. signer string The signing identity to be used for the blockchain transaction, in a format understood by this connector. from string The identity to be used for the source of the transfer, in a format understood by this connector. to string The identity to be used for the destination of the transfer, in a format understood by this connector. amount number string The amount of tokens to transfer. tokenIndex string (OPTIONAL) For non-fungible tokens, the index of the specific token to transfer. requestId string (OPTIONAL) A unique identifier for this request. Will be included in the \"receipt\" websocket event to match receipts to requests. data string (OPTIONAL) A data string that should be returned in the connector's response to this transfer request. config object (OPTIONAL) An arbitrary JSON object where the connector may accept additional parameters if desired. Each connector may define its own valid options to influence how the transfer is carried out. interface object (OPTIONAL) Details on interface methods that are useful to this operation, as negotiated previously by a /checkinterface call. Response
HTTP 202: request was accepted, but transfer will occur asynchronously, with \"receipt\" and \"token-transfer\" events sent later on the websocket.
See Response Types: Async Request
"},{"location":"reference/microservices/fftokens/#post-approval","title":"POST /approval","text":"Approve another identity to manage tokens.
Request
{\n \"namespace\": \"default\",\n \"poolLocator\": \"id=F1\",\n \"signer\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A\",\n \"operator\": \"0xb107ed9caa1323b7bc36e81995a4658ec2251951\",\n \"approved\": true,\n \"requestId\": \"1\",\n \"data\": \"approval-metadata\",\n \"config\": {},\n \"interface\": {}\n}\n Parameter Type Description namespace string The namespace of the token pool poolLocator string The locator of the pool, as supplied by the output of the pool creation. signer string The signing identity to be used for the blockchain transaction, in a format understood by this connector. operator string The identity to be approved (or unapproved) for managing the signer's tokens. approved boolean Whether to approve (the default) or unapprove. requestId string (OPTIONAL) A unique identifier for this request. Will be included in the \"receipt\" websocket event to match receipts to requests. data string (OPTIONAL) A data string that should be returned in the connector's response to this approval request. config object (OPTIONAL) An arbitrary JSON object where the connector may accept additional parameters if desired. Each connector may define its own valid options to influence how the approval is carried out. interface object (OPTIONAL) Details on interface methods that are useful to this operation, as negotiated previously by a /checkinterface call. Response
HTTP 202: request was accepted, but approval will occur asynchronously, with \"receipt\" and \"token-approval\" events sent later on the websocket.
See Response Types: Async Request
"},{"location":"reference/microservices/fftokens/#websocket-commands","title":"Websocket Commands","text":"In order to start listening for events on a certain namespace, the client needs to send the start command. Clients should send this command every time they connect, or after an automatic reconnect.
{\n \"type\": \"start\",\n \"namespace\": \"default\"\n}\n"},{"location":"reference/microservices/fftokens/#websocket-events","title":"Websocket Events","text":"A connector should expose a websocket at /api/ws. All emitted websocket events are a JSON string of the form:
{\n \"id\": \"event-id\",\n \"event\": \"event-name\",\n \"data\": {}\n}\n The event name will match one of the names listed below, and the data payload will correspond to the linked response object.
All events except the receipt event must be acknowledged by sending an ack of the form:
{\n \"event\": \"ack\",\n \"data\": {\n \"id\": \"event-id\"\n }\n}\n Many messages may also be batched into a single websocket event of the form:
{\n \"id\": \"event-id\",\n \"event\": \"batch\",\n \"data\": {\n \"events\": [\n {\n \"event\": \"event-name\",\n \"data\": {}\n },\n ...\n ]\n }\n}\n Batched messages must be acked all at once using the ID of the batch.
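The start/ack protocol above can be sketched from the client side. This is a minimal illustration using the frame shapes shown in this section, not part of any connector; the `acks_for` helper name is invented:

```python
import json

# Sent every time the client connects (or reconnects), per the section above
START = json.dumps({"type": "start", "namespace": "default"})

def acks_for(frame):
    """Given one raw websocket frame from the connector, return the list
    of JSON ack frames to send back (empty for receipt events)."""
    evt = json.loads(frame)
    if evt["event"] == "receipt":
        return []  # receipt events are the one kind that is not acked
    # Single events and batches are acked identically: acking the ID of
    # a "batch" event acknowledges every event inside it at once
    return [json.dumps({"event": "ack", "data": {"id": evt["id"]}})]
```

A real client would send `START` on connect, then feed each incoming frame through `acks_for` and write the results back to the socket.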
"},{"location":"reference/microservices/fftokens/#receipt","title":"receipt","text":"An asynchronous operation has completed.
See Response Types: Receipt
"},{"location":"reference/microservices/fftokens/#token-pool","title":"token-pool","text":"A new token pool has been created or activated.
See Response Types: Token Pool
"},{"location":"reference/microservices/fftokens/#token-mint","title":"token-mint","text":"Tokens have been minted.
See Response Types: Token Transfer
"},{"location":"reference/microservices/fftokens/#token-burn","title":"token-burn","text":"Tokens have been burned.
See Response Types: Token Transfer
"},{"location":"reference/microservices/fftokens/#token-transfer","title":"token-transfer","text":"Tokens have been transferred.
See Response Types: Token Transfer
"},{"location":"reference/microservices/fftokens/#token-approval","title":"token-approval","text":"Token approvals have changed.
See Response Types: Token Approval
"},{"location":"reference/microservices/fftokens/#response-types","title":"Response Types","text":""},{"location":"reference/microservices/fftokens/#async-request","title":"Async Request","text":"Many operations may happen asynchronously in the background, and will return only a request ID. This may be a request ID that was passed in, or if none was passed, will be randomly assigned. This ID can be used to correlate with a receipt event later received on the websocket.
{\n \"id\": \"b84ab27d-0d50-42a6-9c26-2fda5eb901ba\"\n}\n"},{"location":"reference/microservices/fftokens/#receipt_1","title":"Receipt","text":"{\n \"headers\": {\n \"type\": \"\",\n \"requestId\": \"\"\n },\n \"transactionHash\": \"\",\n \"errorMessage\": \"\"\n}\n Parameter Type Description headers.type string enum The type of this response. Should be \"TransactionSuccess\", \"TransactionUpdate\", or \"TransactionFailed\". headers.requestId string The ID of the request to which this receipt should correlate. transactionHash string The unique identifier for the blockchain transaction which generated this receipt. errorMessage string (OPTIONAL) If this is a failure, contains details on the reason for the failure."},{"location":"reference/microservices/fftokens/#token-pool_1","title":"Token Pool","text":"{\n \"namespace\": \"default\",\n \"type\": \"fungible\",\n \"data\": \"pool-metadata\",\n \"poolLocator\": \"id=F1\",\n \"standard\": \"ERC20\",\n \"interfaceFormat\": \"abi\",\n \"symbol\": \"FFC\",\n \"decimals\": 18,\n \"info\": {},\n \"signer\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A\",\n \"blockchain\": {}\n}\n Parameter Type Description namespace string The namespace of the token pool type string enum The type of pool that was created. data string A copy of the data that was passed in on the creation request. poolLocator string A string to identify this pool, generated by the connector. Must be unique for each pool created by this connector. Will be passed back on all operations within this pool, and may be packed with relevant data about the pool for later usage (such as the address and type of the pool). standard string (OPTIONAL) The name of a well-defined token standard to which this pool conforms. interfaceFormat string enum (OPTIONAL) If this connector supports the /checkinterface API, this is the interface format to be used for describing the interface underpinning this pool. Must be \"abi\" or \"ffi\". 
symbol string (OPTIONAL) The symbol for this token pool, if applicable. decimals number (OPTIONAL) The number of decimals used for balances in this token pool, if applicable. info object (OPTIONAL) Additional information about the pool. Each connector may define the format for this object. signer string (OPTIONAL) If this operation triggered a blockchain transaction, the signing identity used for the transaction. blockchain object (OPTIONAL) If this operation triggered a blockchain transaction, contains details on the blockchain event in FireFly's standard blockchain event format."},{"location":"reference/microservices/fftokens/#token-transfer_1","title":"Token Transfer","text":"Note that mint and burn operations are just specialized versions of transfer. A mint will omit the \"from\" field, while a burn will omit the \"to\" field.
{\n \"namespace\": \"default\",\n \"id\": \"1\",\n \"data\": \"transfer-metadata\",\n \"poolLocator\": \"id=F1\",\n \"poolData\": \"extra-pool-info\",\n \"from\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A\",\n \"to\": \"0xb107ed9caa1323b7bc36e81995a4658ec2251951\",\n \"amount\": \"1\",\n \"tokenIndex\": \"1\",\n \"uri\": \"ipfs://000000\",\n \"signer\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A\",\n \"blockchain\": {}\n}\n Parameter Type Description namespace string The namespace of the token pool id string An identifier for this transfer. Must be unique for every transfer within this pool. data string A copy of the data that was passed in on the mint/burn/transfer request. May be omitted if the token contract does not support a method of attaching extra data (will result in reduced ability for FireFly to correlate the inputs and outputs of the transaction). poolLocator string The locator of the pool, as supplied by the output of the pool creation. poolData string The extra data associated with the pool at pool activation. from string The identity used for the source of the transfer. to string The identity used for the destination of the transfer. amount number string The amount of tokens transferred. tokenIndex string (OPTIONAL) For non-fungible tokens, the index of the specific token transferred. uri string (OPTIONAL) For non-fungible tokens, the URI attached to the token. signer string (OPTIONAL) If this operation triggered a blockchain transaction, the signing identity used for the transaction. 
blockchain object (OPTIONAL) If this operation triggered a blockchain transaction, contains details on the blockchain event in FireFly's standard blockchain event format."},{"location":"reference/microservices/fftokens/#token-approval_1","title":"Token Approval","text":"{\n \"namespace\": \"default\",\n \"id\": \"1\",\n \"data\": \"transfer-metadata\",\n \"poolLocator\": \"id=F1\",\n \"poolData\": \"extra-pool-info\",\n \"operator\": \"0xb107ed9caa1323b7bc36e81995a4658ec2251951\",\n \"approved\": true,\n \"subject\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A:0xb107ed9caa1323b7bc36e81995a4658ec2251951\",\n \"info\": {},\n \"signer\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A\",\n \"blockchain\": {}\n}\n Parameter Type Description namespace string The namespace of the token pool id string An identifier for this approval. Must be unique for every approval within this pool. data string A copy of the data that was passed in on the approval request. May be omitted if the token contract does not support a method of attaching extra data (will result in reduced ability for FireFly to correlate the inputs and outputs of the transaction). poolLocator string The locator of the pool, as supplied by the output of the pool creation. poolData string The extra data associated with the pool at pool activation. operator string The identity that was approved (or unapproved) for managing tokens. approved boolean Whether this was an approval or unapproval. subject string A string identifying the scope of the approval, generated by the connector. Approvals with the same subject are understood to replace one another, so that a previously-recorded approval becomes inactive. This string may be a combination of the identities involved, the token index, etc. info object (OPTIONAL) Additional information about the approval. Each connector may define the format for this object. 
signer string (OPTIONAL) If this operation triggered a blockchain transaction, the signing identity used for the transaction. blockchain object (OPTIONAL) If this operation triggered a blockchain transaction, contains details on the blockchain event in FireFly's standard blockchain event format."},{"location":"reference/types/batch/","title":"Batch","text":"A batch bundles a number of off-chain messages, with associated data, into a single payload for broadcast or private transfer.
This allows the transfer of many messages (hundreds) to be backed by a single blockchain transaction, making very efficient use of the blockchain.
The same benefit also applies to the off-chain transport mechanism.
Shared storage operations benefit from the same optimization. In IPFS, for example, chunks are 256KB in size, so there is a great throughput benefit in packaging many small messages into a single large payload.
For a data exchange transport, there is often cryptography and transport overhead for each individual transport level send between participants. This is particularly true if using a data exchange transport with end-to-end payload encryption, using public/private key cryptography for the envelope.
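The amortization benefit can be shown with rough arithmetic. The 1KB average message size here is a hypothetical figure purely for illustration; only the 256KB IPFS chunk size comes from the text above:

```python
# Rough illustration: amortizing one blockchain transaction and one
# shared-storage chunk across many small messages in a single batch.
CHUNK_BYTES = 256 * 1024       # IPFS chunk size mentioned above
avg_message_bytes = 1024       # assumed average message size (hypothetical)

# Number of messages that can share one chunk - and hence, when batched,
# one blockchain transaction and one transport-level send
messages_per_chunk = CHUNK_BYTES // avg_message_bytes
```

Under these assumptions a single batch carries a few hundred messages per chunk, which is why the document says hundreds of messages can be backed by a single blockchain transaction.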
"},{"location":"reference/types/batch/#example","title":"Example","text":"{\n \"id\": \"894bc0ea-0c2e-4ca4-bbca-b4c39a816bbb\",\n \"type\": \"private\",\n \"namespace\": \"ns1\",\n \"node\": \"5802ab80-fa71-4f52-9189-fb534de93756\",\n \"group\": \"cd1fedb69fb83ad5c0c62f2f5d0b04c59d2e41740916e6815a8e063b337bd32e\",\n \"created\": \"2022-05-16T01:23:16Z\",\n \"author\": \"did:firefly:org/example\",\n \"key\": \"0x0a989907dcd17272257f3ebcf72f4351df65a846\",\n \"hash\": \"78d6861f860c8724468c9254b99dc09e7d9fd2d43f26f7bd40ecc9ee47be384d\",\n \"payload\": {\n \"tx\": {\n \"type\": \"private\",\n \"id\": \"04930d84-0227-4044-9d6d-82c2952a0108\"\n },\n \"messages\": [],\n \"data\": []\n }\n}\n"},{"location":"reference/types/batch/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID of the batch UUID type The type of the batch FFEnum:\"broadcast\"\"private\" namespace The namespace of the batch string node The UUID of the node that generated the batch UUID group The privacy group the batch is sent to, for private batches Bytes32 created The time the batch was sealed FFTime author The DID of identity of the submitter string key The on-chain signing key used to sign the transaction string hash The hash of the manifest of the batch Bytes32 payload Batch.payload BatchPayload"},{"location":"reference/types/batch/#batchpayload","title":"BatchPayload","text":"Field Name Description Type tx BatchPayload.tx TransactionRef messages BatchPayload.messages Message[] data BatchPayload.data Data[]"},{"location":"reference/types/batch/#transactionref","title":"TransactionRef","text":"Field Name Description Type type The type of the FireFly transaction FFEnum: id The UUID of the FireFly transaction UUID"},{"location":"reference/types/blockchainevent/","title":"BlockchainEvent","text":"Blockchain Events are detected by the blockchain plugin:
Each Blockchain Event (once final) exists in an absolute location somewhere in the transaction history of the blockchain: a particular slot, in a particular block.
How that position is described involves blockchain specifics, depending on how a particular blockchain represents transactions, blocks and events (or \"logs\").
So FireFly uses a flexible string protocolId in the core object to represent this location, along with a convention adopted by the blockchain plugins to create some consistency.
An example protocolId string is: 000000000041/000020/000003
000000000041 - the block number; 000020 - the transaction index within that block; 000003 - the event (/log) index within that block. The string is alphanumerically sortable as a plain string.
Sufficient zero padding is included at each layer to support future expansion without creating a string that would no longer sort correctly.
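The padding convention above can be sketched as a small helper. This is an illustration only; the 12/6/6 digit widths are inferred from the example protocolId shown above, and the function name is invented:

```python
def protocol_id(block: int, tx_index: int, event_index: int) -> str:
    """Compose the conventional zero-padded protocolId string:
    BLOCKNUMBER/TXN_INDEX/EVENT_INDEX (widths inferred from the example)."""
    return f"{block:012d}/{tx_index:06d}/{event_index:06d}"

# Because every layer is fixed-width, a plain string sort of protocolIds
# matches the chronological order of the events on the chain.
ids = [protocol_id(41, 20, 3), protocol_id(41, 20, 2), protocol_id(7, 999, 0)]
assert sorted(ids) == [protocol_id(7, 999, 0),
                       protocol_id(41, 20, 2),
                       protocol_id(41, 20, 3)]
```

For instance, `protocol_id(41, 20, 3)` produces the `000000000041/000020/000003` string from the example above.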
"},{"location":"reference/types/blockchainevent/#example","title":"Example","text":"{\n \"id\": \"e9bc4735-a332-4071-9975-b1066e51ab8b\",\n \"source\": \"ethereum\",\n \"namespace\": \"ns1\",\n \"name\": \"MyEvent\",\n \"listener\": \"c29b4595-03c2-411a-89e3-8b7f27ef17bb\",\n \"protocolId\": \"000000000048/000000/000000\",\n \"output\": {\n \"addr1\": \"0x55860105d6a675dbe6e4d83f67b834377ba677ad\",\n \"value2\": \"42\"\n },\n \"info\": {\n \"address\": \"0x57A9bE18CCB50D06B7567012AaF6031D669BBcAA\",\n \"blockHash\": \"0xae7382ef2573553f517913b927d8b9691ada8d617266b8b16f74bb37aa78cae8\",\n \"blockNumber\": \"48\",\n \"logIndex\": \"0\",\n \"signature\": \"Changed(address,uint256)\",\n \"subId\": \"sb-e4d5efcd-2eba-4ed1-43e8-24831353fffc\",\n \"timestamp\": \"1653048837\",\n \"transactionHash\": \"0x34b0327567fefed09ac7b4429549bc609302b08a9cbd8f019a078ec44447593d\",\n \"transactionIndex\": \"0x0\"\n },\n \"timestamp\": \"2022-05-16T01:23:15Z\",\n \"tx\": {\n \"blockchainId\": \"0x34b0327567fefed09ac7b4429549bc609302b08a9cbd8f019a078ec44447593d\"\n }\n}\n"},{"location":"reference/types/blockchainevent/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID assigned to the event by FireFly UUID source The blockchain plugin or token service that detected the event string namespace The namespace of the listener that detected this blockchain event string name The name of the event in the blockchain smart contract string listener The UUID of the listener that detected this event, or nil for built-in events in the system namespace UUID protocolId An alphanumerically sortable string that represents this event uniquely on the blockchain (convention for plugins is zero-padded values BLOCKNUMBER/TXN_INDEX/EVENT_INDEX) string output The data output by the event, parsed to JSON according to the interface of the smart contract JSONObject info Detailed blockchain specific information about the event, as generated by the blockchain connector 
JSONObject timestamp The time allocated to this event by the blockchain. This is the block timestamp for most blockchain connectors FFTime tx If this blockchain event is correlated to a FireFly transaction, such as a FireFly submitted token transfer, this field is set to the UUID of the FireFly transaction BlockchainTransactionRef"},{"location":"reference/types/blockchainevent/#blockchaintransactionref","title":"BlockchainTransactionRef","text":"Field Name Description Type type The type of the FireFly transaction FFEnum: id The UUID of the FireFly transaction UUID blockchainId The blockchain transaction ID, in the format specific to the blockchain involved in the transaction. Not all FireFly transactions include a blockchain string"},{"location":"reference/types/contractapi/","title":"ContractAPI","text":"Contract APIs provide generated REST APIs for on-chain smart contracts.
API endpoints are generated to invoke or perform query operations against each of the functions/methods implemented by the smart contract.
API endpoints are also provided to add listeners to the events of that smart contract.
Note that once you have established listeners for your blockchain events into FireFly, you need to also subscribe in your application to receive the FireFly events (of type blockchain_event_received) that are emitted for each detected blockchain event.
For more information see the Events reference section.
"},{"location":"reference/types/contractapi/#url","title":"URL","text":"The base path for your Contract API is:
/api/v1/namespaces/{ns}/apis/{apiName}
For the default namespace, this can be shortened to:
/api/v1/apis/{apiName}
Contract APIs are registered against:
A FireFly Interface (FFI) definition, which defines, in a blockchain-agnostic format, the list of functions/events supported by the smart contract, along with detailed type information about the inputs/outputs to those functions/events.
An optional location configured on the Contract API, describing where in the blockchain layer the instance of the smart contract that the API should interact with exists. For example, the address of the smart contract for an Ethereum based blockchain, or the name and channel for a Hyperledger Fabric based blockchain.
If the location is not specified on creation of the Contract API, then it must be specified on each API call made to the Contract API endpoints.
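The per-call location override can be sketched as a small request builder. This is an assumption-laden illustration, not FireFly's client library: the `/invoke/{method}` path suffix and the `input`/`location` body fields are based on FireFly's generated Contract APIs, and should be confirmed against the swagger.json downloadable from your own Contract API:

```python
def invoke_request(api_name, method, inputs, ns="default", location=None):
    """Build the URL and JSON body for invoking a Contract API method.
    The path and body shapes here are assumptions - verify against the
    OpenAPI definition generated for your Contract API."""
    url = f"/api/v1/namespaces/{ns}/apis/{api_name}/invoke/{method}"
    body = {"input": inputs}
    if location is not None:
        # When no location was fixed at Contract API creation time,
        # it must be supplied on every call, as described above
        body["location"] = location
    return url, body
```

For example, invoking a hypothetical `set` method on `my_contract_api` with an explicit Ethereum contract address yields a URL of `/api/v1/namespaces/default/apis/my_contract_api/invoke/set` and a body carrying both `input` and `location`.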
Each Contract API comes with an OpenAPI V3 / Swagger generated definition, which can be downloaded from:
/api/v1/namespaces/{namespaces}/apis/{apiName}/api/swagger.json
A browser / exerciser UI for your API is also available on:
/api/v1/namespaces/{namespaces}/apis/{apiName}/api{\n \"id\": \"0f12317b-85a0-4a77-a722-857ea2b0a5fa\",\n \"namespace\": \"ns1\",\n \"interface\": {\n \"id\": \"c35d3449-4f24-4676-8e64-91c9e46f06c4\"\n },\n \"location\": {\n \"address\": \"0x95a6c4895c7806499ba35f75069198f45e88fc69\"\n },\n \"name\": \"my_contract_api\",\n \"message\": \"b09d9f77-7b16-4760-a8d7-0e3c319b2a16\",\n \"urls\": {\n \"api\": \"http://127.0.0.1:5000/api/v1/namespaces/default/apis/my_contract_api\",\n \"openapi\": \"http://127.0.0.1:5000/api/v1/namespaces/default/apis/my_contract_api/api/swagger.json\",\n \"ui\": \"http://127.0.0.1:5000/api/v1/namespaces/default/apis/my_contract_api/api\"\n },\n \"published\": false\n}\n"},{"location":"reference/types/contractapi/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID of the contract API UUID namespace The namespace of the contract API string interface Reference to the FireFly Interface definition associated with the contract API FFIReference location If this API is tied to an individual instance of a smart contract, this field can include a blockchain specific contract identifier. 
For example an Ethereum contract address, or a Fabric chaincode name and channel JSONAny name The name that is used in the URL to access the API string networkName The published name of the API within the multiparty network string message The UUID of the broadcast message that was used to publish this API to the network UUID urls The URLs to use to access the API ContractURLs published Indicates if the API is published to other members of the multiparty network bool"},{"location":"reference/types/contractapi/#ffireference","title":"FFIReference","text":"Field Name Description Type id The UUID of the FireFly interface UUID name The name of the FireFly interface string version The version of the FireFly interface string"},{"location":"reference/types/contractapi/#contracturls","title":"ContractURLs","text":"Field Name Description Type api The URL to use to invoke the API string openapi The URL to download the OpenAPI v3 (Swagger) description for the API generated in JSON or YAML format string ui The URL to use in a web browser to access the SwaggerUI explorer/exerciser for the API string"},{"location":"reference/types/contractlistener/","title":"ContractListener","text":"A contract listener configures FireFly to stream events from the blockchain, from a specific location on the blockchain, according to a given definition of the interface for that event.
Check out the Custom Contracts Tutorial for a walk-through of how to set up listeners for the events from your smart contracts.
See below for a deep dive into the format of contract listeners and important concepts to understand when managing them.
"},{"location":"reference/types/contractlistener/#event-filters","title":"Event filters","text":""},{"location":"reference/types/contractlistener/#multiple-filters","title":"Multiple filters","text":"From v1.3.1 onwards, a contract listener can be created with multiple filters under a single topic, when supported by the connector. Each filter contains:
In addition to this list of multiple filters, the listener specifies a single topic to identify the stream of events.
Creating a single listener that listens for multiple events will allow for the easiest management of listeners, and for strong ordering of the events that they process.
"},{"location":"reference/types/contractlistener/#single-filter","title":"Single filter","text":"Before v1.3.1, each contract listener supported listening to only one specific event from a contract interface. Each listener consisted of:
a topic, which determines the ordered stream that these events are part of. For backwards compatibility, this format is still supported by the API.
"},{"location":"reference/types/contractlistener/#signature-strings","title":"Signature strings","text":""},{"location":"reference/types/contractlistener/#string-format","title":"String format","text":"Each filter is identified by a generated signature that matches a single event, and each contract listener is identified by a signature computed from its filters.
Ethereum provides a string standard for event signatures, of the form EventName(uint256,bytes). Prior to v1.3.1, the signature of each Ethereum contract listener would exactly follow this Ethereum format.
As of v1.3.1, Ethereum format signature strings have been changed in FireFly, because this format does not fully describe the event - particularly because each top-level parameter in the ABI definition can be marked as indexed. For example, while the following two Solidity events have the same signature, they are serialized differently due to the different placement of indexed parameters, and thus a listener must define both individually to be able to process them:
event Transfer(address indexed _from, address indexed _to, uint256 _value)\n event Transfer(address indexed _from, address indexed _to, uint256 indexed _tokenId);\n The two above are now expressed in the following manner by the FireFly Ethereum blockchain connector:
Transfer(address,address,uint256) [i=0,1]\nTransfer(address,address,uint256) [i=0,1,2]\n The [i=] listing at the end of the signature indicates the position of all parameters that are marked as indexed.
Building on the blockchain-specific signature format for each event, FireFly will then compute the final signature for each filter and each contract listener as follows:
0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1:Transfer(address,address,uint256) [i=0,1];
FireFly restricts the creation of a contract listener containing duplicate filters.
This includes the special case where one filter is a superset of another filter, due to a wildcard location.
For example, if two filters are listening to the same event, but one has specified a location and the other hasn't, then the latter is a superset, and is already listening to all the events matching the first filter. Creation of duplicate or superset filters within a single listener will be blocked.
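The signature format described above can be sketched with a couple of helpers. These are illustrations following the examples in this section, not FireFly's own code; the function names are invented:

```python
def event_signature(name, types, indexed):
    """Compose the FireFly-Ethereum-style event signature (v1.3.1+):
    the Ethereum signature plus the positions of indexed parameters."""
    sig = f"{name}({','.join(types)})"
    if indexed:
        sig += " [i=" + ",".join(str(i) for i in indexed) + "]"
    return sig

def filter_signature(location, name, types, indexed):
    # Each filter prefixes its (optional) location to the event signature,
    # as shown in the 0xa5ea...:Transfer(...) example above
    return f"{location}:{event_signature(name, types, indexed)}"
```

With this sketch, the two `Transfer` events from the Solidity example produce distinct signatures (`[i=0,1]` vs `[i=0,1,2]`), which is exactly why FireFly moved beyond the plain Ethereum format.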
"},{"location":"reference/types/contractlistener/#duplicate-listeners","title":"Duplicate listeners","text":"As noted above, each listener has a generated signature. This signature - containing all the locations and event signatures combined with the listener topic - will guarantee uniqueness of the contract listener. If you tried to create the same listener again, you would receive HTTP 409. This combination can allow a developer to assert that their listener exists, without the risk of creating duplicates.
Note: Prior to v1.3.1, FireFly would detect duplicates simply by requiring a unique combination of signature + topic + location for each listener. The updated behavior for the listener signature is intended to preserve similar functionality, even when dealing with listeners that contain many event filters.
"},{"location":"reference/types/contractlistener/#backwards-compatibility","title":"Backwards compatibility","text":"As noted throughout this document, the behavior of listeners has changed in v1.3.1. However, the following behaviors are retained for backwards-compatibility, to ensure that code written prior to v1.3.1 should continue to function.
Listeners will continue to populate the top-level event and location fields; the first entry in the filters array is duplicated to these fields. When creating a listener, the event and location fields are still supported, and are translated into a filters array with a single entry. The signature field is preserved at the listener level. The two input formats supported when creating a contract listener are shown below.
"},{"location":"reference/types/contractlistener/#with-event-definition","title":"With event definition","text":"In these examples, the event schema in the FireFly Interface format is provided describing the event and its parameters. See FireFly Interface Format
Multiple Filters
{\n \"filters\": [\n {\n \"event\": {\n \"name\": \"Changed\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"x\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n },\n \"location\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n }\n },\n {\n \"event\": {\n \"name\": \"AnotherEvent\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"my-field\",\n \"schema\": {\n \"type\": \"string\",\n \"details\": {\n \"type\": \"address\",\n \"internalType\": \"address\",\n \"indexed\": true\n }\n }\n }\n ]\n },\n \"location\": {\n \"address\": \"0xa4ea5d0b6b2eaf194716f0cc73981939dca27da1\"\n }\n }\n ],\n \"options\": {\n \"firstEvent\": \"newest\"\n },\n \"topic\": \"simple-storage\"\n}\n One filter (old format)
{\n \"event\": {\n \"name\": \"Changed\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"x\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n },\n \"location\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n },\n \"options\": {\n \"firstEvent\": \"newest\"\n },\n \"topic\": \"simple-storage\"\n}\n"},{"location":"reference/types/contractlistener/#with-interface-reference","title":"With interface reference","text":"These examples use an interface reference when creating the filters, the eventPath field is used to reference an event defined within the interface provided. In this case, we do not need to provide the event schema as the section above shows. See an example of creating a FireFly Interface for an EVM smart contract.
Multiple Filters
{\n \"filters\": [\n {\n \"interface\": {\n \"id\": \"8bdd27a5-67c1-4960-8d1e-7aa31b9084d3\"\n },\n \"location\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n },\n \"eventPath\": \"Changed\"\n },\n {\n \"interface\": {\n \"id\": \"8bdd27a5-67c1-4960-8d1e-7aa31b9084d3\"\n },\n \"location\": {\n \"address\": \"0xa4ea5d0b6b2eaf194716f0cc73981939dca27da1\"\n },\n \"eventPath\": \"AnotherEvent\"\n }\n ],\n \"options\": {\n \"firstEvent\": \"newest\"\n },\n \"topic\": \"simple-storage\"\n}\n One filter (old format)
{\n \"interface\": {\n \"id\": \"8bdd27a5-67c1-4960-8d1e-7aa31b9084d3\"\n },\n \"location\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n },\n \"eventPath\": \"Changed\",\n \"options\": {\n \"firstEvent\": \"newest\"\n },\n \"topic\": \"simple-storage\"\n}\n"},{"location":"reference/types/contractlistener/#example","title":"Example","text":"{\n \"id\": \"d61980a9-748c-4c72-baf5-8b485b514d59\",\n \"interface\": {\n \"id\": \"ff1da3c1-f9e7-40c2-8d93-abb8855e8a1d\"\n },\n \"namespace\": \"ns1\",\n \"name\": \"contract1_events\",\n \"backendId\": \"sb-dd8795fc-a004-4554-669d-c0cf1ee2c279\",\n \"location\": {\n \"address\": \"0x596003a91a97757ef1916c8d6c0d42592630d2cf\"\n },\n \"created\": \"2022-05-16T01:23:15Z\",\n \"event\": {\n \"name\": \"Changed\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"x\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n },\n \"signature\": \"0x596003a91a97757ef1916c8d6c0d42592630d2cf:Changed(uint256)\",\n \"topic\": \"app1_topic\",\n \"options\": {\n \"firstEvent\": \"newest\"\n },\n \"filters\": [\n {\n \"event\": {\n \"name\": \"Changed\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"x\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n },\n \"location\": {\n \"address\": \"0x596003a91a97757ef1916c8d6c0d42592630d2cf\"\n },\n \"signature\": \"0x596003a91a97757ef1916c8d6c0d42592630d2cf:Changed(uint256)\"\n }\n ]\n}\n"},{"location":"reference/types/contractlistener/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID of the smart contract listener UUID interface Deprecated: Please use 'interface' in the array of 'filters' instead FFIReference namespace The namespace of the listener, which defines the namespace of all blockchain events detected by this listener string name A 
descriptive name for the listener string backendId An ID assigned by the blockchain connector to this listener string location Deprecated: Please use 'location' in the array of 'filters' instead JSONAny created The creation time of the listener FFTime event Deprecated: Please use 'event' in the array of 'filters' instead FFISerializedEvent signature A concatenation of all the stringified signature of the event and location, as computed by the blockchain plugin string topic A topic to set on the FireFly event that is emitted each time a blockchain event is detected from the blockchain. Setting this topic on a number of listeners allows applications to easily subscribe to all events they need string options Options that control how the listener subscribes to events from the underlying blockchain ContractListenerOptions filters A list of filters for the contract listener. Each filter is made up of an Event and an optional Location. Events matching these filters will always be emitted in the order determined by the blockchain. ListenerFilter[]"},{"location":"reference/types/contractlistener/#ffireference","title":"FFIReference","text":"Field Name Description Type id The UUID of the FireFly interface UUID name The name of the FireFly interface string version The version of the FireFly interface string"},{"location":"reference/types/contractlistener/#ffiserializedevent","title":"FFISerializedEvent","text":"Field Name Description Type name The name of the event string description A description of the smart contract event string params An array of event parameter/argument definitions FFIParam[] details Additional blockchain specific fields about this event from the original smart contract. Used by the blockchain plugin and for documentation generation. JSONObject"},{"location":"reference/types/contractlistener/#ffiparam","title":"FFIParam","text":"Field Name Description Type name The name of the parameter. 
Note that parameters must be ordered correctly on the FFI, according to the order in the blockchain smart contract string schema FireFly uses an extended subset of JSON Schema to describe parameters, similar to OpenAPI/Swagger. Converters are available for native blockchain interface definitions / type systems - such as an Ethereum ABI. See the documentation for more detail JSONAny"},{"location":"reference/types/contractlistener/#contractlisteneroptions","title":"ContractListenerOptions","text":"Field Name Description Type firstEvent A blockchain specific string, such as a block number, to start listening from. The special strings 'oldest' and 'newest' are supported by all blockchain connectors. Default is 'newest' string"},{"location":"reference/types/contractlistener/#listenerfilter","title":"ListenerFilter","text":"Field Name Description Type event The definition of the event, either provided in-line when creating the listener, or extracted from the referenced FFI when supplied FFISerializedEvent location A blockchain specific contract identifier. For example an Ethereum contract address, or a Fabric chaincode name and channel JSONAny interface A reference to an existing FFI, containing pre-registered type information for the event, used in combination with eventPath FFIReference signature The stringified signature of the event and location, as computed by the blockchain plugin string"},{"location":"reference/types/data/","title":"Data","text":"Data is a uniquely identified piece of data available for retrieval or transfer.
Multiple data items can be attached to a message when sending data off-chain to another party in a multi-party system. Note that if you pass data in-line when sending a message, those data elements will be stored separately to the message and available to retrieve separately later.
A UUID is allocated to each data resource.
A hash is also calculated as follows:
- If the data has only a value, the hash is of the value serialized as JSON with no additional whitespace (order of the keys is retained from the original upload order).
- If the data has only a blob attachment, the hash is of the blob data.
- If the data has both a blob and a value, then the hash is a hash of the concatenation of a hash of the value and a hash of the blob.

Each data resource can contain a value, which is any JSON type: string, number, boolean, array or object. This value is stored directly in the FireFly database.
If the value you are storing is not JSON data, but is small enough you want it to be stored in the core database, then use a JSON string to store an encoded form of your data (such as XML, CSV etc.).
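The hashing rules above can be sketched as follows. This is an illustration only: SHA-256 is an assumption (consistent with the `Bytes32` hash fields shown in the examples), and the exact canonical serialization is defined by FireFly core, not by this sketch.

```python
import hashlib
import json

def hash_value(value) -> str:
    # Serialize as JSON with no additional whitespace; Python dicts preserve
    # insertion order, mirroring "order of the keys is retained".
    compact = json.dumps(value, separators=(",", ":"))
    return hashlib.sha256(compact.encode()).hexdigest()

def hash_data(value=None, blob: bytes = None) -> str:
    # Sketch of the three cases described above (SHA-256 is an assumption).
    if value is not None and blob is None:
        return hash_value(value)
    if blob is not None and value is None:
        return hashlib.sha256(blob).hexdigest()
    # Both present: hash of the concatenation of the two hashes.
    value_hash = bytes.fromhex(hash_value(value))
    blob_hash = hashlib.sha256(blob).digest()
    return hashlib.sha256(value_hash + blob_hash).hexdigest()
```

For example, `hash_data({"a": "example"})` hashes the compact JSON form of the value, while `hash_data(value, blob_bytes)` combines the two hashes as described.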
"},{"location":"reference/types/data/#datatype-validation-of-agreed-data-types","title":"Datatype - validation of agreed data types","text":"A datatype can be associated with your data, causing FireFly to verify the value against a schema before accepting it (on upload, or receipt from another party in the network).
These datatypes are pre-established via broadcast messages, and support versioning. Use this system to enforce a set of common data types for exchange of data across your business network, and reduce the overhead of data verification required in the application/integration tier.
More information can be found in the Datatype section.
"},{"location":"reference/types/data/#blob-binary-data-stored-via-the-data-exchange","title":"Blob - binary data stored via the Data Exchange","text":"Data resources can also contain a blob attachment, which is stored via the Data Exchange plugin outside of the FireFly core database. This is intended for large data payloads, which might be structured or unstructured. PDF documents, multi-MB XML payloads, CSV data exports, JPEG images video files etc.
A Data resource can contain both a value JSON payload, and a blob attachment, meaning that you bind a set of metadata to a binary payload. For example a set of extracted metadata from OCR processing of a PDF document.
One special case is a filename for a document. This pattern is so common for file/document management scenarios, that special handling is provided for it. If a JSON object is stored in value, and it has a property called name, then this value forms part of the data hash (as does every field in the value) and is stored in a separately indexed blob.name field.
The upload REST API provides an autometa form field, which can be set to ask FireFly core to automatically set the value to contain the filename, size, and MIME type from the file upload.
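As a local illustration of what the autometa behaviour and the indexed blob.name/path handling look like, here is a sketch. The value field names (name/size/mimetype) are assumptions based on the description above; check the upload API response for the exact keys FireFly sets.

```python
import mimetypes
import os

def autometa_value(path: str) -> dict:
    """Build a value like the one FireFly's 'autometa' form field produces.

    Field names here (name/size/mimetype) are illustrative assumptions based
    on the description above, not a guaranteed API contract."""
    mime, _ = mimetypes.guess_type(path)
    return {
        "name": os.path.basename(path),
        "size": os.path.getsize(path),
        "mimetype": mime or "application/octet-stream",
    }

def blob_path(name: str) -> str:
    # Sketch of the BlobRef 'path' derivation: the '/' prefixed and
    # separated path extracted from the full name. Behaviour for a plain
    # filename with no path is an assumption here (empty string).
    parts = name.split("/")[:-1]
    return "/" + "/".join(parts) if parts else ""
```

For example, `blob_path("dir/sub/file.pdf")` yields `/dir/sub`, which is the separately indexed path portion of the name.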
{\n \"id\": \"4f11e022-01f4-4c3f-909f-5226947d9ef0\",\n \"validator\": \"json\",\n \"namespace\": \"ns1\",\n \"hash\": \"5e2758423c99b799f53d3f04f587f5716c1ff19f1d1a050f40e02ea66860b491\",\n \"created\": \"2022-05-16T01:23:15Z\",\n \"datatype\": {\n \"name\": \"widget\",\n \"version\": \"v1.2.3\"\n },\n \"value\": {\n \"name\": \"filename.pdf\",\n \"a\": \"example\",\n \"b\": {\n \"c\": 12345\n }\n },\n \"blob\": {\n \"hash\": \"cef238f7b02803a799f040cdabe285ad5cd6db4a15cb9e2a1000f2860884c7ad\",\n \"size\": 12345,\n \"name\": \"filename.pdf\"\n }\n}\n"},{"location":"reference/types/data/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID of the data resource UUID validator The data validator type FFEnum: namespace The namespace of the data resource string hash The hash of the data resource. Derived from the value and the hash of any binary blob attachment Bytes32 created The creation time of the data resource FFTime datatype The optional datatype to use of validation of this data DatatypeRef value The value for the data, stored in the FireFly core database. Can be any JSON type - object, array, string, number or boolean. Can be combined with a binary blob attachment JSONAny public If the JSON value has been published to shared storage, this field is the id of the data in the shared storage plugin (IPFS hash etc.) string blob An optional hash reference to a binary blob attachment BlobRef"},{"location":"reference/types/data/#datatyperef","title":"DatatypeRef","text":"Field Name Description Type name The name of the datatype string version The version of the datatype. 
Semantic versioning is encouraged, such as v1.0.1 string"},{"location":"reference/types/data/#blobref","title":"BlobRef","text":"Field Name Description Type hash The hash of the binary blob data Bytes32 size The size of the binary data int64 name The name field from the metadata attached to the blob, commonly used as a path/filename, and indexed for search string path If a name is specified, this field stores the '/' prefixed and separated path extracted from the full name string public If the blob data has been published to shared storage, this field is the id of the data in the shared storage plugin (IPFS hash etc.) string"},{"location":"reference/types/datatype/","title":"Datatype","text":"A datatype defines the format of some data that can be shared between parties, in a way that FireFly can enforce consistency of that data against the schema.
Data that does not match the schema associated with it will not be accepted on upload to FireFly, and if this were bypassed by a participant in some way it would be rejected by all parties and result in a message_rejected event (rather than message_confirmed event).
Currently JSON Schema validation of data is supported.
The system for defining datatypes is pluggable, to support other schemes in the future, such as XML Schema, or CSV, EDI etc.
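To make "verify the value against a schema" concrete, here is a deliberately tiny, toy subset of JSON Schema checking, covering only the keywords used in the widget example below (type, properties, additionalProperties). FireFly itself applies a full JSON Schema validator; this sketch is for illustration only.

```python
def toy_validate(value, schema) -> list:
    """A tiny subset of JSON Schema validation, for illustration only --
    FireFly itself applies a full JSON Schema validator."""
    errors = []
    if schema.get("type") == "object":
        if not isinstance(value, dict):
            return [f"expected object, got {type(value).__name__}"]
        props = schema.get("properties", {})
        type_map = {"string": str, "integer": int, "number": (int, float),
                    "boolean": bool, "object": dict, "array": list}
        for key, val in value.items():
            if key not in props:
                if schema.get("additionalProperties") is False:
                    errors.append(f"additional property not allowed: {key}")
                continue
            expected = props[key].get("type")
            if expected in type_map and not isinstance(val, type_map[expected]):
                errors.append(f"{key}: expected {expected}")
    return errors

# Schema matching the widget example below (metadata keywords omitted).
widget_schema = {
    "type": "object",
    "properties": {
        "id": {"type": "string"},
        "name": {"type": "string"},
    },
    "additionalProperties": False,
}
```

A value that violates the schema (wrong type, or an extra property when additionalProperties is false) would be rejected on upload, or rejected by all parties on receipt.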
"},{"location":"reference/types/datatype/#example","title":"Example","text":"{\n \"id\": \"3a479f7e-ddda-4bda-aa24-56d06c0bf08e\",\n \"message\": \"bfcf904c-bdf7-40aa-bbd7-567f625c26c0\",\n \"validator\": \"json\",\n \"namespace\": \"ns1\",\n \"name\": \"widget\",\n \"version\": \"1.0.0\",\n \"hash\": \"639cd98c893fa45a9df6fd87bd0393a9b39e31e26fbb1eeefe90cb40c3fa02d2\",\n \"created\": \"2022-05-16T01:23:16Z\",\n \"value\": {\n \"$id\": \"https://example.com/widget.schema.json\",\n \"$schema\": \"https://json-schema.org/draft/2020-12/schema\",\n \"title\": \"Widget\",\n \"type\": \"object\",\n \"properties\": {\n \"id\": {\n \"type\": \"string\",\n \"description\": \"The unique identifier for the widget.\"\n },\n \"name\": {\n \"type\": \"string\",\n \"description\": \"The person's last name.\"\n }\n },\n \"additionalProperties\": false\n }\n}\n"},{"location":"reference/types/datatype/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID of the datatype UUID message The UUID of the broadcast message that was used to publish this datatype to the network UUID validator The validator that should be used to verify this datatype FFEnum:\"json\"\"none\"\"definition\" namespace The namespace of the datatype. Data resources can only be created referencing datatypes in the same namespace string name The name of the datatype string version The version of the datatype. Multiple versions can exist with the same name. Use of semantic versioning is encourages, such as v1.0.1 string hash The hash of the value, such as the JSON schema. Allows all parties to be confident they have the exact same rules for verifying data created against a datatype Bytes32 created The time the datatype was created FFTime value The definition of the datatype, in the syntax supported by the validator (such as a JSON Schema definition) JSONAny"},{"location":"reference/types/event/","title":"Event","text":"Every Event emitted by FireFly shares a common structure.
See Events for a reference for how the overall event bus in Hyperledger FireFly operates, and descriptions of all the sub-categories of events.
"},{"location":"reference/types/event/#sequence","title":"Sequence","text":"A local sequence number is assigned to each event, and you can use an API to query events using this sequence number in exactly the same order that they are delivered to your application.
Events have a reference to the UUID of an object that is the subject of the event, such as a detailed Blockchain Event, or an off-chain Message.
When events are delivered to your application, the reference field is automatically retrieved and included in the JSON payload that is delivered to your application.
You can use the ?fetchreferences query parameter on API calls to request the same in-line JSON payload be included in query results.
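A query against the events API might be built as follows. The `fetchreferences` parameter is the one described above; the `sort` syntax is an assumption based on FireFly's common query conventions, and the base URL and namespace are placeholders.

```python
from urllib.parse import urlencode

def events_url(base: str, ns: str, **params) -> str:
    # Build a query URL for the events API. 'fetchreferences' is described
    # above; the 'sort' parameter syntax is an assumption here.
    qs = urlencode(params)
    return f"{base}/api/v1/namespaces/{ns}/events" + (f"?{qs}" if qs else "")

url = events_url("http://localhost:5000", "ns1",
                 fetchreferences="true", sort="sequence")
```

Sorting by `sequence` returns events in exactly the order they are delivered to your application, as described above.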
The type of the reference also determines what subscription filters apply when performing server-side filters.
Here is the mapping between event types, and the object that you find in the reference field.
For some event types, there is a secondary reference to an object that is associated with the event. This is set in a correlator field on the Event, but is not automatically fetched. This field is primarily used for the confirm option on API calls to allow FireFly to determine when a request has succeeded/failed.
Events have a topic, and how that topic is determined is specific to the type of event. This is intended to be a property you would use to filter events to your application, or query all historical events associated with a given business data stream.
For example when you send a Message, you set the topics you want that message to apply to, and FireFly ensures a consistent global order between all parties that receive that message.
When actions are submitted by a FireFly node, they are performed within a FireFly Transaction. The events that occur as a direct result of that transaction, are tagged with the transaction ID so that they can be grouped together.
This construct is a distinct higher level construct than a Blockchain transaction, that groups together a number of operations/events that might be on-chain or off-chain. In some cases, such as unpinned off-chain data transfer, a FireFly transaction can exist when there is no blockchain transaction at all. Wherever possible you will find that FireFly tags the FireFly transaction with any associated Blockchain transaction(s).
Note that some events cannot be tagged with a Transaction ID:
data payload in the event they emitted)

| Event types | reference | topic | correlator |
|---|---|---|---|
| transaction_submitted | Transaction | transaction.type | |
| message_confirmed, message_rejected | Message | message.header.topics[i]* | message.header.cid |
| token_pool_confirmed | TokenPool | tokenPool.id | |
| token_pool_op_failed | Operation | tokenPool.id | tokenPool.id |
| token_transfer_confirmed | TokenTransfer | tokenPool.id | |
| token_transfer_op_failed | Operation | tokenPool.id | tokenTransfer.localId |
| token_approval_confirmed | TokenApproval | tokenPool.id | |
| token_approval_op_failed | Operation | tokenPool.id | tokenApproval.localId |
| namespace_confirmed | Namespace | "ff_definition" | |
| datatype_confirmed | Datatype | "ff_definition" | |
| identity_confirmed, identity_updated | Identity | "ff_definition" | |
| contract_interface_confirmed | FFI | "ff_definition" | |
| contract_api_confirmed | ContractAPI | "ff_definition" | |
| blockchain_event_received | BlockchainEvent | From listener ** | |
| blockchain_invoke_op_succeeded | Operation | | |
| blockchain_invoke_op_failed | Operation | | |
| blockchain_contract_deploy_op_succeeded | Operation | | |
| blockchain_contract_deploy_op_failed | Operation | | |

** The topic for a blockchain event is inherited from the blockchain listener, allowing you to create multiple blockchain listeners that all deliver messages to your application on a single FireFly topic.
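Since the event type determines what object the reference field points to, a typical consumer dispatches on the type before processing. A minimal sketch (the returned labels are illustrative, not a FireFly API):

```python
def handle_event(event: dict):
    # Dispatch on the event 'type'; the 'reference' UUID points at a
    # type-specific object, as described in the mapping above.
    etype = event["type"]
    if etype in ("message_confirmed", "message_rejected"):
        return ("message", event["reference"])
    if etype == "blockchain_event_received":
        return ("blockchain", event["reference"])
    if etype == "transaction_submitted":
        return ("transaction", event["reference"])
    # Other types (token pools, definitions, operations) would be handled
    # similarly; this sketch ignores them.
    return ("ignored", None)
```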
"},{"location":"reference/types/event/#example","title":"Example","text":"{\n \"id\": \"5f875824-b36b-4559-9791-a57a2e2b30dd\",\n \"sequence\": 168,\n \"type\": \"transaction_submitted\",\n \"namespace\": \"ns1\",\n \"reference\": \"0d12aa75-5ed8-48a7-8b54-45274c6edcb1\",\n \"tx\": \"0d12aa75-5ed8-48a7-8b54-45274c6edcb1\",\n \"topic\": \"batch_pin\",\n \"created\": \"2022-05-16T01:23:15Z\"\n}\n"},{"location":"reference/types/event/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID assigned to this event by your local FireFly node UUID sequence A sequence indicating the order in which events are delivered to your application. Assure to be unique per event in your local FireFly database (unlike the created timestamp) int64 type All interesting activity in FireFly is emitted as a FireFly event, of a given type. The 'type' combined with the 'reference' can be used to determine how to process the event within your application FFEnum:\"transaction_submitted\"\"message_confirmed\"\"message_rejected\"\"datatype_confirmed\"\"identity_confirmed\"\"identity_updated\"\"token_pool_confirmed\"\"token_pool_op_failed\"\"token_transfer_confirmed\"\"token_transfer_op_failed\"\"token_approval_confirmed\"\"token_approval_op_failed\"\"contract_interface_confirmed\"\"contract_api_confirmed\"\"blockchain_event_received\"\"blockchain_invoke_op_succeeded\"\"blockchain_invoke_op_failed\"\"blockchain_contract_deploy_op_succeeded\"\"blockchain_contract_deploy_op_failed\" namespace The namespace of the event. Your application must subscribe to events within a namespace string reference The UUID of an resource that is the subject of this event. The event type determines what type of resource is referenced, and whether this field might be unset UUID correlator For message events, this is the 'header.cid' field from the referenced message. 
For certain other event types, a secondary object is referenced such as a token pool UUID tx The UUID of a transaction that this event is part of. Not all events are part of a transaction UUID topic A stream of information this event relates to. For message confirmation events, a separate event is emitted for each topic in the message. For blockchain events, the listener specifies the topic. Rules exist for how the topic is set for other event types string created The time the event was emitted. Not guaranteed to be unique, or to increase between events in the same order as the final sequence in which events are delivered to your application. As such, the 'sequence' field should be used instead of the 'created' field for querying events in the exact order they are delivered to applications FFTime"},{"location":"reference/types/ffi/","title":"FFI","text":"See FireFly Interface Format
"},{"location":"reference/types/ffi/#example","title":"Example","text":"{\n \"id\": \"c35d3449-4f24-4676-8e64-91c9e46f06c4\",\n \"message\": \"e4ad2077-5714-416e-81f9-7964a6223b6f\",\n \"namespace\": \"ns1\",\n \"name\": \"SimpleStorage\",\n \"description\": \"A simple example contract in Solidity\",\n \"version\": \"v0.0.1\",\n \"methods\": [\n {\n \"id\": \"8f3289dd-3a19-4a9f-aab3-cb05289b013c\",\n \"interface\": \"c35d3449-4f24-4676-8e64-91c9e46f06c4\",\n \"name\": \"get\",\n \"namespace\": \"ns1\",\n \"pathname\": \"get\",\n \"description\": \"Get the current value\",\n \"params\": [],\n \"returns\": [\n {\n \"name\": \"output\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\"\n }\n }\n }\n ],\n \"details\": {\n \"stateMutability\": \"viewable\"\n }\n },\n {\n \"id\": \"fc6f54ee-2e3c-4e56-b17c-4a1a0ae7394b\",\n \"interface\": \"c35d3449-4f24-4676-8e64-91c9e46f06c4\",\n \"name\": \"set\",\n \"namespace\": \"ns1\",\n \"pathname\": \"set\",\n \"description\": \"Set the value\",\n \"params\": [\n {\n \"name\": \"newValue\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\"\n }\n }\n }\n ],\n \"returns\": [],\n \"details\": {\n \"stateMutability\": \"payable\"\n }\n }\n ],\n \"events\": [\n {\n \"id\": \"9f653f93-86f4-45bc-be75-d7f5888fbbc0\",\n \"interface\": \"c35d3449-4f24-4676-8e64-91c9e46f06c4\",\n \"namespace\": \"ns1\",\n \"pathname\": \"Changed\",\n \"signature\": \"Changed(address,uint256)\",\n \"name\": \"Changed\",\n \"description\": \"Emitted when the value changes\",\n \"params\": [\n {\n \"name\": \"_from\",\n \"schema\": {\n \"type\": \"string\",\n \"details\": {\n \"type\": \"address\",\n \"indexed\": true\n }\n }\n },\n {\n \"name\": \"_value\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\"\n }\n }\n }\n ]\n }\n ],\n \"published\": false\n}\n"},{"location":"reference/types/ffi/#field-descriptions","title":"Field Descriptions","text":"Field Name 
Description Type id The UUID of the FireFly interface (FFI) smart contract definition UUID message The UUID of the broadcast message that was used to publish this FFI to the network UUID namespace The namespace of the FFI string name The name of the FFI - usually matching the smart contract name string networkName The published name of the FFI within the multiparty network string description A description of the smart contract this FFI represents string version A version for the FFI - use of semantic versioning such as 'v1.0.1' is encouraged string methods An array of smart contract method definitions FFIMethod[] events An array of smart contract event definitions FFIEvent[] errors An array of smart contract error definitions FFIError[] published Indicates if the FFI is published to other members of the multiparty network bool"},{"location":"reference/types/ffi/#ffimethod","title":"FFIMethod","text":"Field Name Description Type id The UUID of the FFI method definition UUID interface The UUID of the FFI smart contract definition that this method is part of UUID name The name of the method string namespace The namespace of the FFI string pathname The unique name allocated to this method within the FFI for use on URL paths. Supports contracts that have multiple method overrides with the same name string description A description of the smart contract method string params An array of method parameter/argument definitions FFIParam[] returns An array of method return definitions FFIParam[] details Additional blockchain specific fields about this method from the original smart contract. Used by the blockchain plugin and for documentation generation. JSONObject"},{"location":"reference/types/ffi/#ffiparam","title":"FFIParam","text":"Field Name Description Type name The name of the parameter. 
Note that parameters must be ordered correctly on the FFI, according to the order in the blockchain smart contract string schema FireFly uses an extended subset of JSON Schema to describe parameters, similar to OpenAPI/Swagger. Converters are available for native blockchain interface definitions / type systems - such as an Ethereum ABI. See the documentation for more detail JSONAny"},{"location":"reference/types/ffi/#ffievent","title":"FFIEvent","text":"Field Name Description Type id The UUID of the FFI event definition UUID interface The UUID of the FFI smart contract definition that this event is part of UUID namespace The namespace of the FFI string pathname The unique name allocated to this event within the FFI for use on URL paths. Supports contracts that have multiple event overrides with the same name string signature The stringified signature of the event, as computed by the blockchain plugin string name The name of the event string description A description of the smart contract event string params An array of event parameter/argument definitions FFIParam[] details Additional blockchain specific fields about this event from the original smart contract. Used by the blockchain plugin and for documentation generation. JSONObject"},{"location":"reference/types/ffi/#ffiparam_1","title":"FFIParam","text":"Field Name Description Type name The name of the parameter. Note that parameters must be ordered correctly on the FFI, according to the order in the blockchain smart contract string schema FireFly uses an extended subset of JSON Schema to describe parameters, similar to OpenAPI/Swagger. Converters are available for native blockchain interface definitions / type systems - such as an Ethereum ABI. 
See the documentation for more detail JSONAny"},{"location":"reference/types/ffi/#ffierror","title":"FFIError","text":"Field Name Description Type id The UUID of the FFI error definition UUID interface The UUID of the FFI smart contract definition that this error is part of UUID namespace The namespace of the FFI string pathname The unique name allocated to this error within the FFI for use on URL paths string signature The stringified signature of the error, as computed by the blockchain plugin string name The name of the error string description A description of the smart contract error string params An array of error parameter/argument definitions FFIParam[]"},{"location":"reference/types/ffi/#ffiparam_2","title":"FFIParam","text":"Field Name Description Type name The name of the parameter. Note that parameters must be ordered correctly on the FFI, according to the order in the blockchain smart contract string schema FireFly uses an extended subset of JSON Schema to describe parameters, similar to OpenAPI/Swagger. Converters are available for native blockchain interface definitions / type systems - such as an Ethereum ABI. See the documentation for more detail JSONAny"},{"location":"reference/types/group/","title":"Group","text":"A privacy group is a list of identities that should receive a private communication.
When you send a private message, you can specify the list of participants in-line and it will be resolved to a group. Or you can reference the group using its identifying hash.
The sender of a message must be included in the group along with the other participants. The sender receives an event confirming the message, just as any other participant would do.
When members are specified in-line, the sender is automatically added to the group if it was omitted from the list.
"},{"location":"reference/types/group/#group-identity-hash","title":"Group identity hash","text":"The identifying hash for a group is determined as follows:
The namespace, name, and members array are then serialized into a JSON object, without whitespace.

The mechanism that keeps data private and ordered, without leaking data to the blockchain, is summarized in the below diagram.
The key points are:
- The name of the group can be used as an additional salt in generation of the group hash
- The ordering context (topic+group) is combined with a nonce that is incremented for each individual sender, to form a message-specific hash

See NextPin for more information on the structure used for storing the next expected masked context pin, for each member of the privacy group.
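The group identity hash derivation described above can be sketched as follows. Note that the exact canonicalization (member ordering, the precise set of fields) and the SHA-256 choice are assumptions in this sketch; the hash FireFly computes is authoritative.

```python
import hashlib
import json

def group_hash(namespace: str, name: str, members: list) -> str:
    # Sketch: serialize namespace, name and the members array into a JSON
    # object without whitespace, then hash. SHA-256 and this exact field
    # layout are assumptions for illustration.
    payload = json.dumps(
        {"namespace": namespace, "name": name, "members": members},
        separators=(",", ":"),
    )
    return hashlib.sha256(payload.encode()).hexdigest()
```

Because the optional name is part of the serialized object, it acts as a salt: two groups with identical member lists but different names get different identifying hashes.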
"},{"location":"reference/types/group/#example","title":"Example","text":"{\n \"namespace\": \"ns1\",\n \"name\": \"\",\n \"members\": [\n {\n \"identity\": \"did:firefly:org/1111\",\n \"node\": \"4f563179-b4bd-4161-86e0-c2c1c0869c4f\"\n },\n {\n \"identity\": \"did:firefly:org/2222\",\n \"node\": \"61a99af8-c1f7-48ea-8fcc-489e4822a0ed\"\n }\n ],\n \"localNamespace\": \"ns1\",\n \"message\": \"0b9dfb76-103d-443d-92fd-b114fe07c54d\",\n \"hash\": \"c52ad6c034cf5c7382d9a294f49297096a52eb55cc2da696c564b2a276633b95\",\n \"created\": \"2022-05-16T01:23:16Z\"\n}\n"},{"location":"reference/types/group/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type namespace The namespace of the group within the multiparty network string name The optional name of the group, allowing multiple unique groups to exist with the same list of recipients string members The list of members in this privacy group Member[] localNamespace The local namespace of the group string message The message used to broadcast this group privately to the members UUID hash The identifier hash of this group. Derived from the name and group members Bytes32 created The time when the group was first used to send a message in the network FFTime"},{"location":"reference/types/group/#member","title":"Member","text":"Field Name Description Type identity The DID of the group member string node The UUID of the node that receives a copy of the off-chain message for the identity UUID"},{"location":"reference/types/identity/","title":"Identity","text":"FireFly contains an address book of identities, which is managed in a decentralized way across a multi-party system through claim and verification system.
See FIR-12 for evolution that is happening to Hyperledger FireFly to allow:
Root identities are registered with only a claim - which is a signed transaction from a particular blockchain account, to bind a DID with a name that is unique within the network, to that signing key.
The signing key then becomes a Verifier for that identity. Using that key the root identity can be used to register a new FireFly node in the network, send and receive messages, and register child identities.
When child identities are registered, a claim using a key that is going to be the Verifier for that child identity is required. However, this is insufficient to establish that identity as a child identity of the parent. There must be an additional verification that references the claim (by UUID) using the key verifier of the parent identity.
FireFly has adopted the DID standard for representing identities. A \"DID Method\" name of firefly is used to represent that the built-in identity system of Hyperledger FireFly is being used to resolve these DIDs.
So an example FireFly DID for organization abcd1234 is:
did:firefly:org/abcd1234

The adoption of DIDs in Hyperledger FireFly v1.0 is also a stepping stone to allowing pluggable DID-based identity resolvers into FireFly in the future.
You can also download a DID Document for a FireFly identity, which represents the verifiers and other information about that identity according to the JSON format in the DID standard.
"},{"location":"reference/types/identity/#example","title":"Example","text":"{\n \"id\": \"114f5857-9983-46fb-b1fc-8c8f0a20846c\",\n \"did\": \"did:firefly:org/org_1\",\n \"type\": \"org\",\n \"parent\": \"688072c3-4fa0-436c-a86b-5d89673b8938\",\n \"namespace\": \"ff_system\",\n \"name\": \"org_1\",\n \"messages\": {\n \"claim\": \"911b364b-5863-4e49-a3f8-766dbbae7c4c\",\n \"verification\": \"24636f11-c1f9-4bbb-9874-04dd24c7502f\",\n \"update\": null\n },\n \"created\": \"2022-05-16T01:23:15Z\"\n}\n"},{"location":"reference/types/identity/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID of the identity UUID did The DID of the identity. Unique across namespaces within a FireFly network string type The type of the identity FFEnum:\"org\"\"node\"\"custom\" parent The UUID of the parent identity. Unset for root organization identities UUID namespace The namespace of the identity. Organization and node identities are always defined in the ff_system namespace string name The name of the identity. The name must be unique within the type and namespace string description A description of the identity. Part of the updatable profile information of an identity string profile A set of metadata for the identity. Part of the updatable profile information of an identity JSONObject messages References to the broadcast messages that established this identity and proved ownership of the associated verifiers (keys) IdentityMessages created The creation time of the identity FFTime updated The last update time of the identity profile FFTime"},{"location":"reference/types/identity/#identitymessages","title":"IdentityMessages","text":"Field Name Description Type claim The UUID of claim message UUID verification The UUID of claim message. Unset for root organization identities UUID update The UUID of the most recently applied update message. 
Unset if no updates have been confirmed UUID"},{"location":"reference/types/message/","title":"Message","text":"Message is the envelope by which coordinated data exchange can happen between parties in the network. Data is passed by reference in these messages, and a chain of hashes covering the data and the details of the message, provides a verification against tampering.
A message is made up of three sections:
Sections (1) and (2) are fixed once the message is sent, and a hash is generated that provides tamper protection.
The hash is a function of the header, and all of the data payloads. Calculated as follows:
- The data array, in the form [{\"id\":\"{{DATA_UUID}}\",\"hash\":\"{{DATA_HASH}}\"}], is hashed, and that hash is stored in header.datahash
- The header is serialized as JSON with the deterministic order (listed below) and hashed

Each node independently calculates the hash, and the hash is included in the manifest of the Batch by the node that sends the message. Because the hash of that batch manifest is included in the blockchain transaction, a message transferred to a node that does not match the original message hash is rejected.
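The two hashing steps above can be sketched in Python. This is illustrative only: SHA-256 and compact JSON serialization are assumptions, and the deterministic header field order is defined by FireFly core, not by this sketch.

```python
import hashlib
import json

def data_hash(data_refs: list) -> str:
    # Step 1: the array of {id, hash} data references is serialized with
    # no whitespace and hashed; the result goes in header.datahash.
    arr = json.dumps(
        [{"id": d["id"], "hash": d["hash"]} for d in data_refs],
        separators=(",", ":"),
    )
    return hashlib.sha256(arr.encode()).hexdigest()

def message_hash(header: dict, data_refs: list) -> str:
    # Step 2: the header (including the computed datahash) is serialized
    # and hashed. The authoritative field order is FireFly's, not this
    # sketch's; SHA-256 and compact JSON are assumptions here.
    header = dict(header, datahash=data_hash(data_refs))
    compact = json.dumps(header, separators=(",", ":"))
    return hashlib.sha256(compact.encode()).hexdigest()
```

Because every node runs the same deterministic calculation, any tampering with either the data or the header changes the hash and causes the message to be rejected.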
"},{"location":"reference/types/message/#tag","title":"Tag","text":"The header.tag tells the processors of the message how it should be processed, and what data they should expect it to contain.
If you think of your decentralized application like a state machine, then you need to have a set of well defined transitions that can be performed between states. Each of these transitions that requires off-chain transfer of private data (optionally coordinated with an on-chain transaction) should be expressed as a type of message, with a particular tag.
Every copy of the application that runs in the participants of the network should look at this tag to determine what logic to execute against it.
Note: For consistency in ordering, the sender should also wait to process the state machine transitions associated with the message they send until it is ordered by the blockchain. They should not consider themselves special because they sent the message, and process it immediately - otherwise they could end up processing it in a different order to other parties in the network that are also processing the message.
"},{"location":"reference/types/message/#topics","title":"Topics","text":"The header.topics strings allow you to set the the ordering context for each message you send, and you are strongly encouraged to set it explicitly on every message you send (falling back to the default topic is not recommended).
A key difference between blockchain backed decentralized applications and other event-driven applications, is that there is a single source of truth for the order in which things happen.
In a multi-party system with off-chain transfer of data as well as on-chain transfer of data, the two sets of data need to be coordinated together. The off-chain transfer might happen at different times, and is subject to the reliability of the parties & network links involved in that off-chain communication.
A \"stop the world\" approach to handling a single piece of missing data is not practical for a high volume production business network.
The ordering context is a function of:
- The topic of the message

When an on-chain transaction is detected by FireFly, it can determine the above ordering - noting that privacy is preserved for private messages by masking this ordering context message-by-message with a nonce and the group ID, so that only the participants in that group can decode the ordering context.
If a piece of off-chain data is unavailable, then the FireFly node will block only streams of data that are associated with that ordering context.
For your application, you should choose the most granular identifier you can for your topic to minimize the scope of any blockage if one item of off-chain data fails to be delivered or is delayed. Some good examples are:
There are some advanced scenarios where you need to merge streams of ordered data, so that two previously separately ordered streams of communication (different state machines) are joined together to process a critical decision/transition in a deterministic order.
This creates a synchronization point between two otherwise independent streams of communication.
To do this, simply specify two topics in the message you send, and the message will be independently ordered against both of those topics.
You will also receive two events for the confirmation of that message, one for each topic.
Some examples:
- Resolving two duplicate business transactions, 000001 and 000002, by discarding business transaction 000001 as a duplicate. You would specify topics: [\"000001\",\"000002\"] on the special merge message, and then from that point onwards you would only need to specify topics: [\"000002\"].
- Merging two entities, id1 and id2, into a merged entity with id3. You would specify topics: [\"id1\",\"id2\",\"id3\"] on the special merge message, and then from that point onwards you would only need to specify topics: [\"id3\"].

By default messages are pinned to the blockchain, within a Batch.
For private messages, you can choose to disable this pinning by setting header.txtype: \"unpinned\".
Broadcast messages must be pinned to the blockchain.
"},{"location":"reference/types/message/#in-line-data","title":"In-line data","text":"When sending a message you can specify the array of Data attachments in-line, as part of the same JSON payload.
For example, a minimal broadcast message could be:
{\n \"data\": [\n {\"value\": \"hello world\"}\n ]\n}\n When you send this message with /api/v1/namespaces/{ns}/messages/broadcast:
- The header will be initialized with the default values, including txtype: \"batch_pin\"
- The data[0] entry will be stored as a Data resource

{\n \"header\": {\n \"id\": \"4ea27cce-a103-4187-b318-f7b20fd87bf3\",\n \"cid\": \"00d20cba-76ed-431d-b9ff-f04b4cbee55c\",\n \"type\": \"private\",\n \"txtype\": \"batch_pin\",\n \"author\": \"did:firefly:org/acme\",\n \"key\": \"0xD53B0294B6a596D404809b1d51D1b4B3d1aD4945\",\n \"created\": \"2022-05-16T01:23:10Z\",\n \"namespace\": \"ns1\",\n \"group\": \"781caa6738a604344ae86ee336ada1b48a404a85e7041cf75b864e50e3b14a22\",\n \"topics\": [\n \"topic1\"\n ],\n \"tag\": \"blue_message\",\n \"datahash\": \"c07be180b147049baced0b6219d9ce7a84ab48f2ca7ca7ae949abb3fe6491b54\"\n },\n \"localNamespace\": \"ns1\",\n \"state\": \"confirmed\",\n \"confirmed\": \"2022-05-16T01:23:16Z\",\n \"data\": [\n {\n \"id\": \"fdf9f118-eb81-4086-a63d-b06715b3bb4e\",\n \"hash\": \"34cf848d896c83cdf433ea7bd9490c71800b316a96aac3c3a78a42a4c455d67d\"\n }\n ]\n}\n"},{"location":"reference/types/message/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type header The message header contains all fields that are used to build the message hash MessageHeader localNamespace The local namespace of the message string hash The hash of the message. Derived from the header, which includes the data hash Bytes32 batch The UUID of the batch in which the message was pinned/transferred UUID txid The ID of the transaction used to order/deliver this message UUID state The current state of the message FFEnum:\"staged\"\"ready\"\"sent\"\"pending\"\"confirmed\"\"rejected\"\"cancelled\" confirmed The timestamp of when the message was confirmed/rejected FFTime rejectReason If a message was rejected, provides details on the rejection reason string data The list of data elements attached to the message DataRef[] pins For private messages, a unique pin hash:nonce is assigned for each topic string[] idempotencyKey An optional unique identifier for a message. 
Cannot be duplicated within a namespace, thus allowing idempotent submission of messages to the API. Local only - not transferred when the message is sent to other members of the network IdempotencyKey"},{"location":"reference/types/message/#messageheader","title":"MessageHeader","text":"Field Name Description Type id The UUID of the message. Unique to each message UUID cid The correlation ID of the message. Set this when a message is a response to another message UUID type The type of the message FFEnum:\"definition\"\"broadcast\"\"private\"\"groupinit\"\"transfer_broadcast\"\"transfer_private\"\"approval_broadcast\"\"approval_private\" txtype The type of transaction used to order/deliver this message FFEnum:\"none\"\"unpinned\"\"batch_pin\"\"network_action\"\"token_pool\"\"token_transfer\"\"contract_deploy\"\"contract_invoke\"\"contract_invoke_pin\"\"token_approval\"\"data_publish\" author The DID of identity of the submitter string key The on-chain signing key used to sign the transaction string created The creation time of the message FFTime namespace The namespace of the message within the multiparty network string topics A message topic associates this message with an ordered stream of data. A custom topic should be assigned - using the default topic is discouraged string[] tag The message tag indicates the purpose of the message to the applications that process it string datahash A single hash representing all data in the message. 
Derived from the array of data ids+hashes attached to this message Bytes32 txparent The parent transaction that originally triggered this message TransactionRef"},{"location":"reference/types/message/#transactionref","title":"TransactionRef","text":"Field Name Description Type type The type of the FireFly transaction FFEnum: id The UUID of the FireFly transaction UUID"},{"location":"reference/types/message/#dataref","title":"DataRef","text":"Field Name Description Type id The UUID of the referenced data resource UUID hash The hash of the referenced data Bytes32"},{"location":"reference/types/namespace/","title":"Namespace","text":"A namespace is a logical isolation domain for different applications, or tenants, that share the FireFly node.
Significant evolution of the Hyperledger FireFly namespace construct is proposed under FIR-12.
"},{"location":"reference/types/namespace/#example","title":"Example","text":"{\n \"name\": \"default\",\n \"networkName\": \"default\",\n \"description\": \"Default predefined namespace\",\n \"created\": \"2022-05-16T01:23:16Z\"\n}\n"},{"location":"reference/types/namespace/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type name The local namespace name string networkName The shared namespace name within the multiparty network string description A description of the namespace string created The time the namespace was created FFTime"},{"location":"reference/types/nextpin/","title":"NextPin","text":"Next-pins are maintained by each member of a privacy group, in order to detect if a on-chain transaction with a given \"pin\" for a message represents the next message for any member of the privacy group.
This allows every member to maintain a global order of transactions within a topic in a privacy group, without leaking the same hash between the messages that are communicated in that group.
See Group for more information on privacy groups.
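The masking scheme above can be sketched as follows. This is an illustrative simplification: the exact byte encoding FireFly uses to combine the context, identity, and nonce differs in detail, but the principle - that the same private context never produces the same on-chain hash twice - is the same.

```python
import hashlib

def masked_pin(context_hex: str, identity: str, nonce: int) -> str:
    """Sketch of next-pin masking: combine the private context hash
    (group-hash + topic), the member identity, and a monotonically
    increasing nonce, so each on-chain pin is unique and only group
    members can recompute it. Encoding here is illustrative only."""
    h = hashlib.sha256()
    h.update(bytes.fromhex(context_hex))
    h.update(identity.encode())
    h.update(str(nonce).encode())
    return h.hexdigest()

context = "a25b65cfe49e5ed78c256e85cf07c96da938144f12fcb02fe4b5243a4631bd5e"
p1 = masked_pin(context, "did:firefly:org/example", 12345)
p2 = masked_pin(context, "did:firefly:org/example", 12346)
assert p1 != p2  # the nonce ensures no hash repeats between messages
```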
"},{"location":"reference/types/nextpin/#example","title":"Example","text":"{\n \"namespace\": \"ns1\",\n \"context\": \"a25b65cfe49e5ed78c256e85cf07c96da938144f12fcb02fe4b5243a4631bd5e\",\n \"identity\": \"did:firefly:org/example\",\n \"hash\": \"00e55c63905a59782d5bc466093ead980afc4a2825eb68445bcf1312cc3d6de2\",\n \"nonce\": 12345\n}\n"},{"location":"reference/types/nextpin/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type namespace The namespace of the next-pin string context The context the next-pin applies to - the hash of the privacy group-hash + topic. The group-hash is only known to the participants (can itself contain a salt in the group-name). This context is combined with the member and nonce to determine the final hash that is written on-chain Bytes32 identity The member of the privacy group the next-pin applies to string hash The unique masked pin string Bytes32 nonce The numeric index - which is monotonically increasing for each member of the privacy group int64"},{"location":"reference/types/operation/","title":"Operation","text":"Operations are stateful external actions that FireFly triggers via plugins. They can succeed or fail. They are grouped into Transactions in order to accomplish a single logical task.
The diagram below shows the different types of operation that are performed by each FireFly plugin type. The color coding (and numbers) map those different types of operation to the Transaction types that include those operations.
"},{"location":"reference/types/operation/#operation-status","title":"Operation status","text":"When initially created an operation is in Initialized state. Once the operation has been successfully sent to its respective plugin to be processed its status moves to Pending state. This indicates that the plugin is processing the operation. The operation will then move to Succeeded or Failed state depending on the outcome.
In the event that an operation could not be submitted to the plugin for processing, for example because the plugin's microservice was temporarily unavailable, the operation will remain in Initialized state. Re-submitting the same FireFly API call using the same idempotency key will cause FireFly to re-submit the operation to its plugin.
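The idempotent re-submission behaviour can be sketched with a simple map keyed by namespace and idempotency key. This is a hypothetical helper, not FireFly source code - it only illustrates why a retry with the same key cannot create a duplicate operation.

```python
# (namespace, idempotencyKey) -> operation id already created for it
submitted: dict[tuple[str, str], str] = {}

def submit(namespace: str, idempotency_key: str, new_op_id: str) -> str:
    """Return the existing operation for this key if one exists,
    otherwise record and return the new one."""
    key = (namespace, idempotency_key)
    if key in submitted:
        return submitted[key]  # same operation re-driven, no duplicate created
    submitted[key] = new_op_id
    return new_op_id

first = submit("ns1", "transfer-0001", "op-a")
retry = submit("ns1", "transfer-0001", "op-b")  # client retry after a timeout
assert first == retry == "op-a"
```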
{\n \"id\": \"04a8b0c4-03c2-4935-85a1-87d17cddc20a\",\n \"namespace\": \"ns1\",\n \"tx\": \"99543134-769b-42a8-8be4-a5f8873f969d\",\n \"type\": \"sharedstorage_upload_batch\",\n \"status\": \"Succeeded\",\n \"plugin\": \"ipfs\",\n \"input\": {\n \"id\": \"80d89712-57f3-48fe-b085-a8cba6e0667d\"\n },\n \"output\": {\n \"payloadRef\": \"QmWj3tr2aTHqnRYovhS2mQAjYneRtMWJSU4M4RdAJpJwEC\"\n },\n \"created\": \"2022-05-16T01:23:15Z\"\n}\n"},{"location":"reference/types/operation/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID of the operation UUID namespace The namespace of the operation string tx The UUID of the FireFly transaction the operation is part of UUID type The type of the operation FFEnum:\"blockchain_pin_batch\"\"blockchain_network_action\"\"blockchain_deploy\"\"blockchain_invoke\"\"sharedstorage_upload_batch\"\"sharedstorage_upload_blob\"\"sharedstorage_upload_value\"\"sharedstorage_download_batch\"\"sharedstorage_download_blob\"\"dataexchange_send_batch\"\"dataexchange_send_blob\"\"token_create_pool\"\"token_activate_pool\"\"token_transfer\"\"token_approval\" status The current status of the operation OpStatus plugin The plugin responsible for performing the operation string input The input to this operation JSONObject output Any output reported back from the plugin for this operation JSONObject error Any error reported back from the plugin for this operation string created The time the operation was created FFTime updated The last update time of the operation FFTime retry If this operation was initiated as a retry to a previous operation, this field points to the UUID of the operation being retried UUID"},{"location":"reference/types/operationwithdetail/","title":"OperationWithDetail","text":"Operation with detail is an extension to operations that allow additional information to be encapsulated with an operation. 
An operation can be supplemented with additional information by its connector, and that information is returned in the detail field.
{\n \"id\": \"04a8b0c4-03c2-4935-85a1-87d17cddc20a\",\n \"namespace\": \"ns1\",\n \"tx\": \"99543134-769b-42a8-8be4-a5f8873f969d\",\n \"type\": \"sharedstorage_upload_batch\",\n \"status\": \"Succeeded\",\n \"plugin\": \"ipfs\",\n \"input\": {\n \"id\": \"80d89712-57f3-48fe-b085-a8cba6e0667d\"\n },\n \"output\": {\n \"payloadRef\": \"QmWj3tr2aTHqnRYovhS2mQAjYneRtMWJSU4M4RdAJpJwEC\"\n },\n \"created\": \"2022-05-16T01:23:15Z\",\n \"detail\": {\n \"created\": \"2023-01-27T17:04:24.26406392Z\",\n \"firstSubmit\": \"2023-01-27T17:04:24.419913295Z\",\n \"gas\": \"4161076\",\n \"gasPrice\": \"0\",\n \"history\": [\n {\n \"actions\": [\n {\n \"action\": \"AssignNonce\",\n \"count\": 1,\n \"lastOccurrence\": \"\",\n \"time\": \"\"\n },\n {\n \"action\": \"RetrieveGasPrice\",\n \"count\": 1,\n \"lastOccurrence\": \"2023-01-27T17:11:41.161213303Z\",\n \"time\": \"2023-01-27T17:11:41.161213303Z\"\n },\n {\n \"action\": \"Submit\",\n \"count\": 1,\n \"lastOccurrence\": \"2023-01-27T17:11:41.222374636Z\",\n \"time\": \"2023-01-27T17:11:41.222374636Z\"\n }\n ],\n \"subStatus\": \"Received\",\n \"time\": \"2023-01-27T17:11:41.122965803Z\"\n },\n {\n \"actions\": [\n {\n \"action\": \"ReceiveReceipt\",\n \"count\": 1,\n \"lastOccurrence\": \"2023-01-27T17:11:47.930332625Z\",\n \"time\": \"2023-01-27T17:11:47.930332625Z\"\n },\n {\n \"action\": \"Confirm\",\n \"count\": 1,\n \"lastOccurrence\": \"2023-01-27T17:12:02.660275549Z\",\n \"time\": \"2023-01-27T17:12:02.660275549Z\"\n }\n ],\n \"subStatus\": \"Tracking\",\n \"time\": \"2023-01-27T17:11:41.222400219Z\"\n },\n {\n \"actions\": [],\n \"subStatus\": \"Confirmed\",\n \"time\": \"2023-01-27T17:12:02.660309382Z\"\n }\n ],\n \"historySummary\": [\n {\n \"count\": 1,\n \"subStatus\": \"Received\"\n },\n {\n \"action\": \"AssignNonce\",\n \"count\": 1\n },\n {\n \"action\": \"RetrieveGasPrice\",\n \"count\": 1\n },\n {\n \"action\": \"Submit\",\n \"count\": 1\n },\n {\n \"count\": 1,\n \"subStatus\": \"Tracking\"\n },\n {\n 
\"action\": \"ReceiveReceipt\",\n \"count\": 1\n },\n {\n \"action\": \"Confirm\",\n \"count\": 1\n },\n {\n \"count\": 1,\n \"subStatus\": \"Confirmed\"\n }\n ],\n \"sequenceId\": \"0185f42f-fec8-93df-aeba-387417d477e0\",\n \"status\": \"Succeeded\",\n \"transactionHash\": \"0xfb39178fee8e725c03647b8286e6f5cb13f982abf685479a9ee59e8e9d9e51d8\"\n }\n}\n"},{"location":"reference/types/operationwithdetail/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID of the operation UUID namespace The namespace of the operation string tx The UUID of the FireFly transaction the operation is part of UUID type The type of the operation FFEnum:\"blockchain_pin_batch\"\"blockchain_network_action\"\"blockchain_deploy\"\"blockchain_invoke\"\"sharedstorage_upload_batch\"\"sharedstorage_upload_blob\"\"sharedstorage_upload_value\"\"sharedstorage_download_batch\"\"sharedstorage_download_blob\"\"dataexchange_send_batch\"\"dataexchange_send_blob\"\"token_create_pool\"\"token_activate_pool\"\"token_transfer\"\"token_approval\" status The current status of the operation OpStatus plugin The plugin responsible for performing the operation string input The input to this operation JSONObject output Any output reported back from the plugin for this operation JSONObject error Any error reported back from the plugin for this operation string created The time the operation was created FFTime updated The last update time of the operation FFTime retry If this operation was initiated as a retry to a previous operation, this field points to the UUID of the operation being retried UUID detail Additional detailed information about an operation provided by the connector ``"},{"location":"reference/types/simpletypes/","title":"Simple Types","text":""},{"location":"reference/types/simpletypes/#uuid","title":"UUID","text":"IDs are generated as UUID V4 globally unique identifiers
"},{"location":"reference/types/simpletypes/#fftime","title":"FFTime","text":"Times are serialized to JSON on the API in RFC 3339 / ISO 8601 nanosecond UTC time for example 2022-05-05T21:19:27.454767543Z.
Note that JavaScript can parse this format happily into millisecond time with Date.parse().
Times are persisted as nanosecond resolution timestamps in the database.
On input, and in queries, times can be parsed from RFC3339, or unix timestamps (second, millisecond or nanosecond resolution).
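Languages whose native time types stop at microsecond resolution need to truncate the nanosecond fraction before parsing. A minimal sketch for Python clients (the function name is illustrative, and it assumes the input always carries a fractional-seconds component, as FireFly timestamps do):

```python
import re
from datetime import datetime, timezone

def parse_fftime(value: str) -> datetime:
    """Parse an RFC 3339 nanosecond UTC timestamp, truncating the
    fraction to microseconds (Python datetime's resolution)."""
    # Keep at most 6 fractional digits so strptime's %f can handle it
    value = re.sub(r"\.(\d{6})\d*", r".\1", value)
    return datetime.strptime(value, "%Y-%m-%dT%H:%M:%S.%fZ").replace(
        tzinfo=timezone.utc
    )

t = parse_fftime("2022-05-05T21:19:27.454767543Z")
assert t.microsecond == 454767  # nanoseconds truncated, not rounded
```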
"},{"location":"reference/types/simpletypes/#ffbigint","title":"FFBigInt","text":"Large integers of up to 256bits in size are common in blockchain, and handled in FireFly.
In JSON output payloads in FireFly, including events, they are serialized as strings (with base 10).
On input you can provide a JSON string (strings with an 0x prefix are parsed as base 16), or a JSON number.
v1.3.1","text":"In versions of FireFly up to and including v1.3.1, be careful when using large JSON numbers. The largest number that is safe to transfer using a JSON number is 2^53 - 1 and it is possible to receive errors from the transaction manager, or for precision to be silently lost when passing numeric parameters larger than that. It is recommended to pass large numbers as strings to avoid loss of precision.
v1.3.2 and higher","text":"In FireFly v1.3.2 support was added for 256-bit precision JSON numbers. Some application frameworks automatically serialize large JSON numbers to a string which FireFly already supports, but there is no upper limit to the size of a number that can be represented in JSON. FireFly now supports much larger JSON numbers, up to 256-bit precision. For example the following input parameter to a contract constructor is now supported:
...\n \"definition\": [{\n \"inputs\": [\n {\n \"internalType\":\"uint256\",\n \"name\": \"x\",\n \"type\": \"uint256\"\n }\n ],\n \"outputs\":[],\n \"type\":\"constructor\"\n }],\n \"params\": [ 10000000000000000000000000 ]\n ...\n Some application frameworks serialize large numbers in scientific notation, e.g. 1e+25. FireFly v1.3.2 added support for handling scientific notation in parameters, removing the need to change an application that uses this number format. For example, the following input parameter to a contract constructor is now supported:
...\n \"definition\": [{\n \"inputs\": [\n {\n \"internalType\":\"uint256\",\n \"name\": \"x\",\n \"type\": \"uint256\"\n }\n ],\n \"outputs\":[],\n \"type\":\"constructor\"\n }],\n \"params\": [ 1e+25 ]\n ...\n"},{"location":"reference/types/simpletypes/#jsonany","title":"JSONAny","text":"Any JSON type. An object, array, string, number, boolean or null.
FireFly stores object data with the same field order as was provided on the input, but with any whitespace removed.
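This storage behaviour - field order preserved, whitespace removed - is equivalent to compact JSON serialization over an insertion-ordered object:

```python
import json

# Python dicts preserve insertion order, matching the stored field order
doc = json.loads('{ "b": 1, "a": 2 }')

# Compact separators remove all whitespace, as FireFly does on storage
stored = json.dumps(doc, separators=(",", ":"))
assert stored == '{"b":1,"a":2}'  # order kept, whitespace gone
```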
"},{"location":"reference/types/simpletypes/#jsonobject","title":"JSONObject","text":"Any JSON Object. Must be an object, rather than an array or a simple type.
"},{"location":"reference/types/subscription/","title":"Subscription","text":"Each Subscription tracks delivery of events to a particular application, and allows FireFly to ensure that messages are delivered reliably to that application.
"},{"location":"reference/types/subscription/#creating-a-subscription","title":"Creating a subscription","text":"Before you can connect to a subscription, you must create it via the REST API.
One special case where you do not need to do this is ephemeral WebSocket connections (described below). For these you can just connect and immediately start receiving events.
When creating a new subscription, you give it a name which is how you will refer to it when you connect.
You are also able to specify server-side filtering that should be performed against the event stream, to limit the set of events that are sent to your application.
All subscriptions are created within a namespace, and automatically filter events to only those emitted within that namespace.
You can create multiple subscriptions for your application, to request different sets of server-side filtering for events. You can then request FireFly to deliver events for both subscriptions over the same WebSocket (if you are using the WebSocket transport). However, delivery order is not assured between two subscriptions.
"},{"location":"reference/types/subscription/#subscriptions-and-workload-balancing","title":"Subscriptions and workload balancing","text":"You can have multiple scaled runtime instances of a single application, all running in parallel. These instances of the application all share a single subscription.
Each event is only delivered once to the subscription, regardless of how many instances of your application connect to FireFly.
With multiple WebSocket connections active on a single subscription, each event might be delivered to a different instance of your application. This means workload is balanced across your instances. However, each event still needs to be acknowledged, so delivery processing order can still be maintained within your application database state.
If you have multiple different applications all needing their own copy of the same event, then you need to configure a separate subscription for each application.
"},{"location":"reference/types/subscription/#pluggable-transports","title":"Pluggable Transports","text":"Hyperledger FireFly has two built-in transports for delivery of events to applications - WebSockets and Webhooks.
The event interface is fully pluggable, so you can extend connectivity over an external event bus - such as NATS, Apache Kafka, RabbitMQ, Redis etc.
"},{"location":"reference/types/subscription/#websockets","title":"WebSockets","text":"If your application has a back-end server runtime, then WebSockets are the most popular option for listening to events. WebSockets are well supported by all popular application development frameworks, and are very firewall friendly for connecting applications into your FireFly server.
Check out the @hyperledger/firefly-sdk SDK for Node.js applications, and the hyperledger/firefly-common module for Golang applications. These both contain reliable WebSocket clients for your event listeners.
A Java SDK is a roadmap item for the community.
"},{"location":"reference/types/subscription/#websocket-protocol","title":"WebSocket protocol","text":"FireFly has a simple protocol on top of WebSockets:
- Each time you connect or reconnect, you send a start JSON payload naming the subscription, before any events are delivered. Alternatively, you can pass the namespace and name query parameters in the URL when you connect, along with query params for other fields of WSStart
- You send an ack payload for each event you receive, unless you enabled autoack in step (1)

The SDK libraries for FireFly help you ensure you send the start payload each time your WebSocket reconnects.
start and ack explicitly","text":"Here's an example websocat command showing an explicit start and ack.
$ websocat ws://localhost:5000/ws\n{\"type\":\"start\",\"namespace\":\"default\",\"name\":\"docexample\"}\n# ... for each event that arrives here, you send an ack ...\n{\"type\":\"ack\",\"id\":\"70ed4411-57cf-4ba1-bedb-fe3b4b5fd6b6\"}\n When creating your subscription, you can set readahead in order to ask FireFly to stream a number of messages to your application, ahead of receiving the acknowledgements.
readahead can be a powerful tool to increase performance, but does require your application to ensure it processes events in the correct order and sends exactly one ack for each event.
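The "exactly one ack per event" discipline can be sketched as a small client-side tracker. This is a hypothetical helper, not part of any FireFly SDK - it only shows the invariant your application must maintain when readahead is enabled.

```python
class ReadaheadTracker:
    """Guard against duplicate acks while events stream ahead of
    acknowledgements. Real clients would also persist processing state."""

    def __init__(self) -> None:
        self.acked: set[str] = set()

    def ack(self, event_id: str) -> dict:
        if event_id in self.acked:
            raise ValueError(f"duplicate ack for {event_id}")
        self.acked.add(event_id)
        # This is the ack frame from the WebSocket protocol shown above
        return {"type": "ack", "id": event_id}

tracker = ReadaheadTracker()
frame = tracker.ack("70ed4411-57cf-4ba1-bedb-fe3b4b5fd6b6")
assert frame["type"] == "ack"
```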
autoack","text":"Here's an example websocat where we use URL query parameters to avoid the need to send a start JSON payload.
We also use autoack so that events just keep flowing from the server.
$ websocat \"ws://localhost:5000/ws?namespace=default&name=docexample&autoack\"\n# ... events just keep arriving here, as the server-side auto-acknowledges\n# the events as it delivers them to you.\n Note that using autoack means you can miss events in the case of a disconnection, so it should not be used for production applications that require at-least-once delivery.
FireFly WebSockets provide a special option to create a subscription dynamically, which only lasts for as long as you are connected to the server.
We call these ephemeral subscriptions.
Here's an example websocat command showing an ephemeral subscription - notice we don't specify a name for the subscription, and there is no need to have already created the subscription beforehand.
Here we also include an extra query parameter to set a server-side filter, to only include message events.
$ websocat \"ws://localhost:5000/ws?namespace=default&ephemeral&autoack&filter.events=message_.*\"\n# ... events just keep arriving here, as the server-side auto-acknowledges\n# the events as it delivers them to you.\n Ephemeral subscriptions are very convenient for experimentation, debugging and monitoring. However, they do not give reliable delivery because you only receive events that occur while you are connected. If you disconnect and reconnect, you will miss all events that happened while your application was not listening.
"},{"location":"reference/types/subscription/#webhooks","title":"Webhooks","text":"The Webhook transport allows FireFly to make HTTP calls against your application's API when events matching your subscription are emitted.
This means the direction of network connection is from the FireFly server, to the application (the reverse of WebSockets). It also means you don't need to add any connection management code to your application - just expose an API that FireFly can call to process the events.
Webhooks are great for serverless functions (AWS Lambda etc.), integrations with SaaS applications, and calling existing APIs.
The FireFly configuration options for a Webhook subscription are very flexible, allowing you to customize your HTTP requests as follows:
- If your application could return a non-2xx HTTP status code or other error, you should enable and configure options.retry
- fastack can be used to acknowledge against FireFly immediately, and make multiple parallel calls to the HTTP API in a fire-and-forget fashion
- To map fields from the data element in message events into the HTTP request, withData needs to be set on the subscription, in addition to the input.* configuration options
- For message_confirmed events, a reply message can be sent automatically - with the cid and topic in the reply message set to match the request, and the tag in the reply message set per the configuration, or dynamically based on a field in the input request data

Webhooks have the ability to batch events into a single HTTP request instead of sending an event per HTTP request. The interface will be a JSON array of events instead of a top-level JSON object with a single event. The size of the batch is capped by the readAhead limit, and an optional timeout can be specified to send the events when the batch hasn't filled.
To enable this, set the following configuration under SubscriptionOptions:
batch | Events are delivered in batches in an ordered array. The batch size is capped to the readAhead limit. The event payload is always an array even if there is a single event in the batch. Commonly used with Webhooks to allow events to be delivered and acknowledged in batches. | bool |
batchTimeout | When batching is enabled, the optional timeout to send events even when the batch hasn't filled. Defaults to 2 seconds | string |
NOTE: When batch is enabled, withData cannot be used, because withData may alter the HTTP request based on a single event, which does not make sense when events are batched.
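A webhook endpoint that wants to support both shapes - a single event object without batching, or a JSON array with batching - can normalize the body on arrival. This is hypothetical handler logic, not FireFly code:

```python
import json

def events_from_body(body: str) -> list:
    """Normalize a webhook body to a list of events: with batch enabled
    the body is a JSON array; without it, a single event object."""
    parsed = json.loads(body)
    return parsed if isinstance(parsed, list) else [parsed]

single = '{"id": "evt-1", "type": "message_confirmed"}'
batch = '[{"id": "evt-1"}, {"id": "evt-2"}]'
assert len(events_from_body(single)) == 1
assert len(events_from_body(batch)) == 2
```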
{\n \"id\": \"c38d69fd-442e-4d6f-b5a4-bab1411c7fe8\",\n \"namespace\": \"ns1\",\n \"name\": \"app1\",\n \"transport\": \"websockets\",\n \"filter\": {\n \"events\": \"^(message_.*|token_.*)$\",\n \"message\": {\n \"tag\": \"^(red|blue)$\"\n },\n \"transaction\": {},\n \"blockchainevent\": {}\n },\n \"options\": {\n \"firstEvent\": \"newest\",\n \"readAhead\": 50\n },\n \"created\": \"2022-05-16T01:23:15Z\",\n \"updated\": null\n}\n"},{"location":"reference/types/subscription/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID of the subscription UUID namespace The namespace of the subscription. A subscription will only receive events generated in the namespace of the subscription string name The name of the subscription. The application specifies this name when it connects, in order to attach to the subscription and receive events that arrived while it was disconnected. If multiple apps connect to the same subscription, events are workload balanced across the connected application instances string transport The transport plugin responsible for event delivery (WebSockets, Webhooks, JMS, NATS etc.) string filter Server-side filter to apply to events SubscriptionFilter options Subscription options SubscriptionOptions ephemeral Ephemeral subscriptions only exist as long as the application is connected, and as such will miss events that occur while the application is disconnected, and cannot be created administratively. You can create one over a connected WebSocket connection bool created Creation time of the subscription FFTime updated Last time the subscription was updated FFTime"},{"location":"reference/types/subscription/#subscriptionfilter","title":"SubscriptionFilter","text":"Field Name Description Type events Regular expression to apply to the event type, to subscribe to a subset of event types string message Filters specific to message events. 
If an event is not a message event, these filters are ignored MessageFilter transaction Filters specific to events with a transaction. If an event is not associated with a transaction, this filter is ignored TransactionFilter blockchainevent Filters specific to blockchain events. If an event is not a blockchain event, these filters are ignored BlockchainEventFilter topic Regular expression to apply to the topic of the event, to subscribe to a subset of topics. Note for messages sent with multiple topics, a separate event is emitted for each topic string topics Deprecated: Please use 'topic' instead string tag Deprecated: Please use 'message.tag' instead string group Deprecated: Please use 'message.group' instead string author Deprecated: Please use 'message.author' instead string"},{"location":"reference/types/subscription/#messagefilter","title":"MessageFilter","text":"Field Name Description Type tag Regular expression to apply to the message 'header.tag' field string group Regular expression to apply to the message 'header.group' field string author Regular expression to apply to the message 'header.author' field string"},{"location":"reference/types/subscription/#transactionfilter","title":"TransactionFilter","text":"Field Name Description Type type Regular expression to apply to the transaction 'type' field string"},{"location":"reference/types/subscription/#blockchaineventfilter","title":"BlockchainEventFilter","text":"Field Name Description Type name Regular expression to apply to the blockchain event 'name' field, which is the name of the event in the underlying blockchain smart contract string listener Regular expression to apply to the blockchain event 'listener' field, which is the UUID of the event listener. So you can restrict your subscription to certain blockchain listeners. 
Alternatively, to avoid your application needing to know listener UUIDs, you can set the 'topic' field of blockchain event listeners, and use a topic filter on your subscriptions string"},{"location":"reference/types/subscription/#subscriptionoptions","title":"SubscriptionOptions","text":"Field Name Description Type firstEvent Whether your application would like to receive events from the 'oldest' event emitted by your FireFly node (from the beginning of time), or the 'newest' event (from now), or a specific event sequence. Default is 'newest' SubOptsFirstEvent readAhead The number of events to stream ahead to your application, while waiting for confirmation of consumption of those events. At least once delivery semantics are used in FireFly, so if your application crashes/reconnects this is the maximum number of events you would expect to be redelivered after it restarts uint withData Whether message events delivered over the subscription should be packaged with the full data of those messages in-line as part of the event JSON payload. Or if the application should make separate REST calls to download that data. May not be supported on some transports. bool batch Events are delivered in batches in an ordered array. The batch size is capped to the readAhead limit. The event payload is always an array even if there is a single event in the batch, allowing client-side optimizations when processing the events in a group. Available for both Webhooks and WebSockets. bool batchTimeout When batching is enabled, the optional timeout to send events even when the batch hasn't filled. string fastack Webhooks only: When true the event will be acknowledged before the webhook is invoked, allowing parallel invocations bool url Webhooks only: HTTP url to invoke. Can be relative if a base URL is set in the webhook plugin config string method Webhooks only: HTTP method to invoke. 
Default=POST string json Webhooks only: Whether to assume the response body is JSON, regardless of the returned Content-Type bool reply Webhooks only: Whether to automatically send a reply event, using the body returned by the webhook bool replytag Webhooks only: The tag to set on the reply message string replytx Webhooks only: The transaction type to set on the reply message string headers Webhooks only: Static headers to set on the webhook request `` query Webhooks only: Static query params to set on the webhook request `` tlsConfigName The name of an existing TLS configuration associated to the namespace to use string input Webhooks only: A set of options to extract data from the first JSON input data in the incoming message. Only applies if withData=true WebhookInputOptions retry Webhooks only: a set of options for retrying the webhook call WebhookRetryOptions httpOptions Webhooks only: a set of options for HTTP WebhookHTTPOptions"},{"location":"reference/types/subscription/#webhookinputoptions","title":"WebhookInputOptions","text":"Field Name Description Type query A top-level property of the first data input, to use for query parameters string headers A top-level property of the first data input, to use for headers string body A top-level property of the first data input, to use for the request body. 
Default is the whole first body string path A top-level property of the first data input, to use for a path to append with escaping to the webhook path string replytx A top-level property of the first data input, to use to dynamically set whether to pin the response (so the requester can choose) string"},{"location":"reference/types/subscription/#webhookretryoptions","title":"WebhookRetryOptions","text":"Field Name Description Type enabled Enables retry on HTTP calls, defaults to false bool count Number of times to retry the webhook call in case of failure int initialDelay Initial delay between retries when we retry the webhook call string maxDelay Max delay between retries when we retry the webhook call string"},{"location":"reference/types/subscription/#webhookhttpoptions","title":"WebhookHTTPOptions","text":"Field Name Description Type proxyURL HTTP proxy URL to use for outbound requests to the webhook string tlsHandshakeTimeout The max duration to hold a TLS handshake alive string requestTimeout The max duration to wait for an HTTP request to complete string maxIdleConns The max number of idle connections to hold pooled int idleTimeout The max duration to hold a HTTP keepalive connection between calls string connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted. string expectContinueTimeout See ExpectContinueTimeout in the Go docs string"},{"location":"reference/types/tokenapproval/","title":"TokenApproval","text":"A token approval is a record that an address other than the owner of a token balance has been granted authority to transfer tokens on the owner's behalf.
The approved \"operator\" (or \"spender\") account might be a smart contract, or another individual.
FireFly provides APIs for initiating and tracking approvals, which token connectors map to the implementation of the underlying token.
The off-chain index maintained in FireFly for allowance allows you to quickly find the most recent allowance event associated with a pair of keys, using the subject field, combined with the active field. When a new Token Approval event is delivered to FireFly Core by the Token Connector, any previous approval for the same subject is marked \"active\": false, and the new approval is marked with \"active\": true
The token connector is responsible for the format of the subject field to reflect the owner / operator (spender) relationship.
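To illustrate this indexing model, the sketch below (plain Python, with hypothetical helper names; this is not FireFly Core's actual implementation) composes a subject in the owner:operator form used by the erc20_erc721 connector, and shows how each new approval event deactivates the previous approval for the same subject:

```python
# Illustrative sketch only: FireFly Core maintains this index internally.
# The owner:operator subject format matches the erc20_erc721 connector;
# other connectors may format the subject differently.

def make_subject(owner: str, operator: str) -> str:
    """Compose the subject key correlating approvals for an owner/operator pair."""
    return f"{owner}:{operator}"

def apply_approval(index: dict, approval: dict) -> None:
    """On a new approval event, deactivate any previous approval for the
    same subject and mark the new one active."""
    subject = approval["subject"]
    previous = index.get(subject)
    if previous is not None:
        previous["active"] = False
    approval["active"] = True
    index[subject] = approval

index = {}
subject = make_subject(
    "0x55860105d6a675dbe6e4d83f67b834377ba677ad",
    "0x30017fd084715e41aa6536ab777a8f3a2b11a5a1",
)
first = {"subject": subject, "approved": True}
second = {"subject": subject, "approved": False}
apply_approval(index, first)
apply_approval(index, second)
# Only the most recent approval for the subject remains active
print(first["active"], second["active"])  # False True
```

This is why querying with the subject field combined with active=true always returns at most one record per owner/operator pair.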
{\n \"localId\": \"1cd3e2e2-dd6a-441d-94c5-02439de9897b\",\n \"pool\": \"1244ecbe-5862-41c3-99ec-4666a18b9dd5\",\n \"connector\": \"erc20_erc721\",\n \"key\": \"0x55860105d6a675dbe6e4d83f67b834377ba677ad\",\n \"operator\": \"0x30017fd084715e41aa6536ab777a8f3a2b11a5a1\",\n \"approved\": true,\n \"info\": {\n \"owner\": \"0x55860105d6a675dbe6e4d83f67b834377ba677ad\",\n \"spender\": \"0x30017fd084715e41aa6536ab777a8f3a2b11a5a1\",\n \"value\": \"115792089237316195423570985008687907853269984665640564039457584007913129639935\"\n },\n \"namespace\": \"ns1\",\n \"protocolId\": \"000000000032/000000/000000\",\n \"subject\": \"0x55860105d6a675dbe6e4d83f67b834377ba677ad:0x30017fd084715e41aa6536ab777a8f3a2b11a5a1\",\n \"active\": true,\n \"created\": \"2022-05-16T01:23:15Z\",\n \"tx\": {\n \"type\": \"token_approval\",\n \"id\": \"4b6e086d-0e31-482d-9683-cd18b2045031\"\n }\n}\n"},{"location":"reference/types/tokenapproval/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type localId The UUID of this token approval, in the local FireFly node UUID pool The UUID the token pool this approval applies to UUID connector The name of the token connector, as specified in the FireFly core configuration file. Required on input when there are more than one token connectors configured string key The blockchain signing key for the approval request. On input defaults to the first signing key of the organization that operates the node string operator The blockchain identity that is granted the approval string approved Whether this record grants permission for an operator to perform actions on the token balance (true), or revokes permission (false) bool info Token connector specific information about the approval operation, such as whether it applied to a limited balance of a fungible token. 
See your chosen token connector documentation for details JSONObject namespace The namespace for the approval, which must match the namespace of the token pool string protocolId An alphanumerically sortable string that represents this event uniquely with respect to the blockchain string subject A string identifying the parties and entities in the scope of this approval, as provided by the token connector string active Indicates if this approval is currently active (only one approval can be active per subject) bool message The UUID of a message that has been correlated with this approval using the data field of the approval in a compatible token connector UUID messageHash The hash of a message that has been correlated with this approval using the data field of the approval in a compatible token connector Bytes32 created The creation time of the token approval FFTime tx If submitted via FireFly, this will reference the UUID of the FireFly transaction (if the token connector in use supports attaching data) TransactionRef blockchainEvent The UUID of the blockchain event UUID config Input only field, with token connector specific configuration of the approval. See your chosen token connector documentation for details JSONObject"},{"location":"reference/types/tokenapproval/#transactionref","title":"TransactionRef","text":"Field Name Description Type type The type of the FireFly transaction FFEnum: id The UUID of the FireFly transaction UUID"},{"location":"reference/types/tokenpool/","title":"TokenPool","text":"Token pools are a FireFly construct for describing a set of tokens.
A pool can represent the total supply of a particular fungible token, or a group of related non-fungible tokens.
The exact definition of a token pool is dependent on the token connector implementation.
Check the documentation for your chosen connector implementation to see the detailed options for configuring a new Token Pool.
Note that it is very common to use a Token Pool to teach Hyperledger FireFly about an existing token, so that you can start interacting with a token that is already deployed.
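As a sketch of what indexing an existing token might look like, the request body below follows the erc20_erc721 connector's convention of an address and block number under the connector-specific config section. The exact config keys are connector-specific assumptions here; check your chosen connector's documentation:

```python
import json

# Hypothetical request body for creating a token pool that indexes an
# existing ERC-20 contract (e.g. via POST /api/v1/namespaces/{ns}/tokens/pools).
# The "config" keys are connector-specific: "address" and "blockNumber"
# follow the erc20_erc721 connector's convention.
pool_request = {
    "name": "my_token",
    "type": "fungible",
    "config": {
        "address": "0x056df1c53c3c00b0e13d37543f46930b42f71db0",
        "blockNumber": "0",  # index transfer history from the start of the chain
    },
}
print(json.dumps(pool_request, indent=2))
```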
"},{"location":"reference/types/tokenpool/#example-token-pool-types","title":"Example token pool types","text":"Some examples of how the generic concept of a Token Pool maps to various well-defined Ethereum standards:
These are provided as examples only - a custom token connector could be backed by any token technology (Ethereum or otherwise) as long as it can support the basic operations described here (create pool, mint, burn, transfer). Other FireFly repos include a sample implementation of a token connector for ERC-20 and ERC-721 as well as ERC-1155.
"},{"location":"reference/types/tokenpool/#example","title":"Example","text":"{\n \"id\": \"90ebefdf-4230-48a5-9d07-c59751545859\",\n \"type\": \"fungible\",\n \"namespace\": \"ns1\",\n \"name\": \"my_token\",\n \"standard\": \"ERC-20\",\n \"locator\": \"address=0x056df1c53c3c00b0e13d37543f46930b42f71db0\\u0026schema=ERC20WithData\\u0026type=fungible\",\n \"decimals\": 18,\n \"connector\": \"erc20_erc721\",\n \"message\": \"43923040-b1e5-4164-aa20-47636c7177ee\",\n \"active\": true,\n \"created\": \"2022-05-16T01:23:15Z\",\n \"info\": {\n \"address\": \"0x056df1c53c3c00b0e13d37543f46930b42f71db0\",\n \"name\": \"pool8197\",\n \"schema\": \"ERC20WithData\"\n },\n \"tx\": {\n \"type\": \"token_pool\",\n \"id\": \"a23ffc87-81a2-4cbc-97d6-f53d320c36cd\"\n },\n \"published\": false\n}\n"},{"location":"reference/types/tokenpool/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID of the token pool UUID type The type of token the pool contains, such as fungible/non-fungible FFEnum:\"fungible\"\"nonfungible\" namespace The namespace for the token pool string name The name of the token pool. Note the name is not validated against the description of the token on the blockchain string networkName The published name of the token pool within the multiparty network string standard The ERC standard the token pool conforms to, as reported by the token connector string locator A unique identifier for the pool, as provided by the token connector string key The signing key used to create the token pool. On input for token connectors that support on-chain deployment of new tokens (vs. only index existing ones) this determines the signing key used to create the token on-chain string symbol The token symbol. 
If supplied on input for an existing on-chain token, this must match the on-chain information string decimals Number of decimal places that this token has int connector The name of the token connector, as specified in the FireFly core configuration file that is responsible for the token pool. Required on input when multiple token connectors are configured string message The UUID of the broadcast message used to inform the network about this pool UUID active Indicates whether the pool has been successfully activated with the token connector bool created The creation time of the pool FFTime config Input only field, with token connector specific configuration of the pool, such as an existing Ethereum address and block number used to index the pool. See your chosen token connector documentation for details JSONObject info Token connector specific information about the pool. See your chosen token connector documentation for details JSONObject tx Reference to the FireFly transaction used to create and broadcast this pool to the network TransactionRef interface A reference to an existing FFI, containing pre-registered type information for the token contract FFIReference interfaceFormat The interface encoding format supported by the connector for this token pool FFEnum:\"abi\"\"ffi\" methods The method definitions resolved by the token connector to be used by each token operation JSONAny published Indicates if the token pool is published to other members of the multiparty network bool"},{"location":"reference/types/tokenpool/#transactionref","title":"TransactionRef","text":"Field Name Description Type type The type of the FireFly transaction FFEnum: id The UUID of the FireFly transaction UUID"},{"location":"reference/types/tokenpool/#ffireference","title":"FFIReference","text":"Field Name Description Type id The UUID of the FireFly interface UUID name The name of the FireFly interface string version The version of the FireFly interface
string"},{"location":"reference/types/tokentransfer/","title":"TokenTransfer","text":"A Token Transfer is created for each transfer of value that happens under a token pool.
The transfers form an off-chain audit history (an \"index\") of the transactions that have been performed on the blockchain.
This historical information cannot be queried directly from the blockchain for most token implementations, because it is inefficient to use the blockchain to store complex data structures like this. So the blockchain simply emits events when state changes, and if you want to be able to query this historical information you need to track it in your own off-chain database.
Hyperledger FireFly maintains this index automatically for all Token Pools that are configured.
"},{"location":"reference/types/tokentransfer/#firefly-initiated-vs-non-firefly-initiated-transfers","title":"FireFly initiated vs. non-FireFly initiated transfers","text":"There is no requirement at all to use FireFly to initiate transfers in Token Pools that Hyperledger FireFly is aware of. FireFly will listen to and update its audit history and balances for all transfers, regardless of whether they were initiated using a FireFly Supernode or not.
So you could for example use Metamask to initiate a transfer directly against an ERC-20/ERC-721 contract on your blockchain, and you will see it appear as a transfer. Or you could initiate a transfer on-chain via another Smart Contract, such as a Hashed Timelock Contract (HTLC) releasing funds held in digital escrow.
"},{"location":"reference/types/tokentransfer/#message-coordinated-transfers","title":"Message coordinated transfers","text":"One special feature enabled when using FireFly to initiate transfers, is to coordinate an off-chain data transfer (private or broadcast) with the on-chain transfer of value. This is a powerful tool to allow transfers to have rich metadata associated that is too sensitive (or too large) to include on the blockchain itself.
These transfers have a message associated with them, and require a compatible Token Connector and on-chain Smart Contract that allows a data payload to be included as part of the transfer, and to be emitted as part of the transfer event.
Examples of how to do this are included in the ERC-20, ERC-721 and ERC-1155 Token Connector sample smart contracts.
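As an illustration, a message-coordinated transfer is requested by including a message alongside the transfer details. The body below is a hedged sketch (the invoiceNumber payload is hypothetical; consult the FireFly API reference for the exact message fields supported by your deployment):

```python
import json

# Hypothetical request body coordinating an off-chain message with an
# on-chain transfer (e.g. via POST /api/v1/namespaces/{ns}/tokens/transfers).
# Requires a compatible token connector and smart contract, as described above.
transfer_request = {
    "amount": "1000000000000000000",
    "to": "0x30017fd084715e41aa6536ab777a8f3a2b11a5a1",
    "message": {
        "data": [
            # Hypothetical metadata payload, kept off-chain but hash-pinned
            {"value": {"invoiceNumber": "INV-1234"}}
        ]
    },
}
print(json.dumps(transfer_request, indent=2))
```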
"},{"location":"reference/types/tokentransfer/#transfer-types","title":"Transfer types","text":"There are three primary types of transfer:
Mint: the from address will be unset for these transfer types.
Burn: the to address will be unset for these transfer types.
Transfer: the from and to addresses are both set for these types of transfers.

Note that the key that signed the Transfer transaction might be different to the from account that is the owner of the tokens before the transfer.
The Approval resource is used to track which signing accounts (other than the owner) have approval to transfer tokens on the owner's behalf.
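For fungible pools, the amount of a transfer is expressed in the token's base units according to the pool's decimals. A small conversion sketch (hypothetical helper names), reproducing the 18-decimals example from the field descriptions:

```python
from decimal import Decimal

def to_base_units(amount: str, decimals: int) -> int:
    """Convert a human-readable fractional amount into the integer base
    units expected by the amount field of a token transfer."""
    return int(Decimal(amount) * (Decimal(10) ** decimals))

def from_base_units(amount: int, decimals: int) -> Decimal:
    """Convert integer base units back into a human-readable amount."""
    return Decimal(amount) / (Decimal(10) ** decimals)

# With 18 decimals, a fractional balance of 10.234 becomes
# 10,234,000,000,000,000,000 base units
print(to_base_units("10.234", 18))                # 10234000000000000000
print(from_base_units(10234000000000000000, 18))  # 10.234
```

Using Decimal (rather than float) avoids rounding errors for large values like these.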
"},{"location":"reference/types/tokentransfer/#example","title":"Example","text":"{\n \"type\": \"transfer\",\n \"pool\": \"1244ecbe-5862-41c3-99ec-4666a18b9dd5\",\n \"uri\": \"firefly://token/1\",\n \"connector\": \"erc20_erc721\",\n \"namespace\": \"ns1\",\n \"key\": \"0x55860105D6A675dBE6e4d83F67b834377Ba677AD\",\n \"from\": \"0x55860105D6A675dBE6e4d83F67b834377Ba677AD\",\n \"to\": \"0x55860105D6A675dBE6e4d83F67b834377Ba677AD\",\n \"amount\": \"1000000000000000000\",\n \"protocolId\": \"000000000041/000000/000000\",\n \"message\": \"780b9b90-e3b0-4510-afac-b4b1f2940b36\",\n \"messageHash\": \"780204e634364c42779920eddc8d9fecccb33e3607eeac9f53abd1b31184ae4e\",\n \"created\": \"2022-05-16T01:23:15Z\",\n \"tx\": {\n \"type\": \"token_transfer\",\n \"id\": \"62767ca8-99f9-439c-9deb-d80c6672c158\"\n },\n \"blockchainEvent\": \"b57fcaa2-156e-4c3f-9b0b-ddec9ee25933\"\n}\n"},{"location":"reference/types/tokentransfer/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type type The type of transfer such as mint/burn/transfer FFEnum:\"mint\"\"burn\"\"transfer\" localId The UUID of this token transfer, in the local FireFly node UUID pool The UUID the token pool this transfer applies to UUID tokenIndex The index of the token within the pool that this transfer applies to string uri The URI of the token this transfer applies to string connector The name of the token connector, as specified in the FireFly core configuration file. Required on input when there are more than one token connectors configured string namespace The namespace for the transfer, which must match the namespace of the token pool string key The blockchain signing key for the transfer. On input defaults to the first signing key of the organization that operates the node string from The source account for the transfer. On input defaults to the value of 'key' string to The target account for the transfer. 
On input defaults to the value of 'key' string amount The amount for the transfer. For non-fungible tokens this will always be 1. For fungible tokens, the number of decimals for the token pool should be considered when inputting the amount. For example, with 18 decimals a fractional balance of 10.234 will be specified as 10,234,000,000,000,000,000 FFBigInt protocolId An alphanumerically sortable string that represents this event uniquely with respect to the blockchain string message The UUID of a message that has been correlated with this transfer using the data field of the transfer in a compatible token connector UUID messageHash The hash of a message that has been correlated with this transfer using the data field of the transfer in a compatible token connector Bytes32 created The creation time of the transfer FFTime tx If submitted via FireFly, this will reference the UUID of the FireFly transaction (if the token connector in use supports attaching data) TransactionRef blockchainEvent The UUID of the blockchain event UUID config Input only field, with token connector specific configuration of the transfer. See your chosen token connector documentation for details JSONObject"},{"location":"reference/types/tokentransfer/#transactionref","title":"TransactionRef","text":"Field Name Description Type type The type of the FireFly transaction FFEnum: id The UUID of the FireFly transaction UUID"},{"location":"reference/types/transaction/","title":"Transaction","text":"FireFly Transactions are a grouping construct for a number of Operations and Events that need to complete or fail as a unit.
FireFly Transactions are not themselves Blockchain transactions, but in many cases there is exactly one Blockchain transaction associated with each FireFly transaction. Exceptions include unpinned transactions, where there is no blockchain transaction at all.
The Blockchain native transaction ID is stored in the FireFly transaction object when it is known. However, the FireFly transaction starts before a Blockchain transaction exists - because reliably submitting the blockchain transaction is one of the operations that is performed inside of the FireFly transaction.
The below screenshot from the FireFly Explorer nicely illustrates how multiple operations and events are associated with a FireFly transaction. In this example, the transaction is tracking the pinning to the blockchain of a batch of messages stored in IPFS.
So there is a Blockchain ID for the transaction - as there is just one Blockchain transaction regardless of how many messages are in the batch. There are operations for the submission of that transaction, and the upload of the data to IPFS. Then there is a corresponding Blockchain Event Received event for the detection of the event from the blockchain smart contract when the transaction was mined, and a Message Confirmed event for each message in the batch (in this case 1). Here, the message was a special Definition message that advertised a new Contract API to all members of the network - so there is a Contract API Confirmed event as well.
Each FireFly transaction has a UUID. This UUID is propagated through to all participants in a FireFly transaction. For example in a Token Transfer that is coordinated with an off-chain private Message, the transaction ID is propagated to all parties who are part of that transaction. So the same UUID can be used to find the transaction in the FireFly Explorer of any member who has access to the message. This is possible because hash-pinned off-chain data is associated with that on-chain transfer.
However, in the case of a raw ERC-20/ERC-721 transfer (without data), or any other raw Blockchain transaction, the FireFly transaction UUID cannot be propagated - so it will be local on the node that initiated the transaction.
"},{"location":"reference/types/transaction/#example","title":"Example","text":"{\n \"id\": \"4e7e0943-4230-4f67-89b6-181adf471edc\",\n \"namespace\": \"ns1\",\n \"type\": \"contract_invoke\",\n \"created\": \"2022-05-16T01:23:15Z\",\n \"blockchainIds\": [\n \"0x34b0327567fefed09ac7b4429549bc609302b08a9cbd8f019a078ec44447593d\"\n ]\n}\n"},{"location":"reference/types/transaction/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID of the FireFly transaction UUID namespace The namespace of the FireFly transaction string type The type of the FireFly transaction FFEnum:\"none\"\"unpinned\"\"batch_pin\"\"network_action\"\"token_pool\"\"token_transfer\"\"contract_deploy\"\"contract_invoke\"\"contract_invoke_pin\"\"token_approval\"\"data_publish\" created The time the transaction was created on this node. Note the transaction is individually created with the same UUID on each participant in the FireFly transaction FFTime idempotencyKey An optional unique identifier for a transaction. Cannot be duplicated within a namespace, thus allowing idempotent submission of transactions to the API IdempotencyKey blockchainIds The blockchain transaction ID, in the format specific to the blockchain involved in the transaction. Not all FireFly transactions include a blockchain. FireFly transactions are extensible to support multiple blockchain transactions string[]"},{"location":"reference/types/verifier/","title":"Verifier","text":"A verifier is a cryptographic verification mechanism for an identity in FireFly.
FireFly generally defers verification of these keys to the lower layers of technologies in the stack - the blockchain (Fabric, Ethereum etc.) or Data Exchange technology.
As such, the details of the public key cryptography scheme are not represented in the FireFly verifiers; only the string identifier of the verifier that is appropriate to the technology is stored.
{\n \"hash\": \"6818c41093590b862b781082d4df5d4abda6d2a4b71d737779edf6d2375d810b\",\n \"identity\": \"114f5857-9983-46fb-b1fc-8c8f0a20846c\",\n \"type\": \"ethereum_address\",\n \"value\": \"0x30017fd084715e41aa6536ab777a8f3a2b11a5a1\",\n \"created\": \"2022-05-16T01:23:15Z\"\n}\n"},{"location":"reference/types/verifier/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type hash Hash used as a globally consistent identifier for this namespace + type + value combination on every node in the network Bytes32 identity The UUID of the parent identity that has claimed this verifier UUID namespace The namespace of the verifier string type The type of the verifier FFEnum:\"ethereum_address\"\"tezos_address\"\"fabric_msp_id\"\"dx_peer_id\" value The verifier string, such as an Ethereum address, or Fabric MSP identifier string created The time this verifier was created on this node FFTime"},{"location":"reference/types/wsack/","title":"WSAck","text":"An ack must be sent on a WebSocket for each event delivered to an application.
The exception is when autoack is set in the WSStart payload or URL parameters, which causes automatic acknowledgement.
Your application should specify the id of each event that it acknowledges.
If the id is omitted, then FireFly will assume the oldest message delivered to the application that has not been acknowledged is the one the ack is associated with.
If multiple subscriptions are started on a WebSocket, then you need to specify the subscription namespace+name as part of each ack.
If you send an acknowledgement that cannot be correlated, then a WSError payload will be sent to the application.
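The rules above can be sketched as a small helper that builds the ack payload (plain Python; the field names match the example below, and the helper name is illustrative):

```python
import json

def build_ack(event_id=None, namespace=None, name=None):
    """Build a WebSocket ack payload. The id and subscription fields are
    optional, but supplying them avoids ambiguity when multiple events are
    inflight or multiple subscriptions share the connection."""
    ack = {"type": "ack"}
    if event_id is not None:
        ack["id"] = event_id
    if namespace is not None and name is not None:
        ack["subscription"] = {"namespace": namespace, "name": name}
    return ack

# Explicit ack for one event on a named subscription
print(json.dumps(build_ack(
    "f78bf82b-1292-4c86-8a08-e53d855f1a64", "ns1", "app1_subscription")))
# Minimal ack: FireFly correlates it with the oldest unacknowledged event
print(json.dumps(build_ack()))
```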
"},{"location":"reference/types/wsack/#example","title":"Example","text":"{\n \"type\": \"ack\",\n \"id\": \"f78bf82b-1292-4c86-8a08-e53d855f1a64\",\n \"subscription\": {\n \"namespace\": \"ns1\",\n \"name\": \"app1_subscription\"\n }\n}\n"},{"location":"reference/types/wsack/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type type WSActionBase.type FFEnum:\"start\"\"ack\"\"protocol_error\"\"event_batch\" id WSAck.id UUID subscription WSAck.subscription SubscriptionRef"},{"location":"reference/types/wsack/#subscriptionref","title":"SubscriptionRef","text":"Field Name Description Type id The UUID of the subscription UUID namespace The namespace of the subscription. A subscription will only receive events generated in the namespace of the subscription string name The name of the subscription. The application specifies this name when it connects, in order to attach to the subscription and receive events that arrived while it was disconnected. If multiple apps connect to the same subscription, events are workload balanced across the connected application instances string"},{"location":"reference/types/wserror/","title":"WSError","text":""},{"location":"reference/types/wserror/#example","title":"Example","text":"{\n \"type\": \"protocol_error\",\n \"error\": \"FF10175: Acknowledgment does not match an inflight event + subscription\"\n}\n"},{"location":"reference/types/wserror/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type type WSAck.type FFEnum:\"start\"\"ack\"\"protocol_error\"\"event_batch\" error WSAck.error string"},{"location":"reference/types/wsstart/","title":"WSStart","text":"The start payload is sent after an application connects to a WebSocket, to start delivery of events over that connection.
The start command can refer to a subscription by name in order to reliably receive all matching events for that subscription, including those that were emitted when the application was disconnected.
Alternatively the start command can request \"ephemeral\": true in order to dynamically create a new subscription that lasts only for the duration that the connection is active.
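The two modes can be illustrated with hedged payload sketches (plain Python; the filter value below is an illustrative assumption, not a required setting):

```python
import json

# Durable: attach to a pre-defined subscription by namespace + name, so
# events emitted while the application was disconnected are still delivered.
durable_start = {
    "type": "start",
    "namespace": "ns1",
    "name": "app1_subscription",
}

# Ephemeral: dynamically create a subscription lasting only for the life of
# this connection; here with an illustrative event-type filter.
ephemeral_start = {
    "type": "start",
    "namespace": "ns1",
    "ephemeral": True,
    "filter": {"events": "^blockchain_event_received$"},  # hypothetical filter
}

print(json.dumps(durable_start))
print(json.dumps(ephemeral_start))
```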
{\n \"type\": \"start\",\n \"autoack\": false,\n \"namespace\": \"ns1\",\n \"name\": \"app1_subscription\",\n \"ephemeral\": false,\n \"filter\": {\n \"message\": {},\n \"transaction\": {},\n \"blockchainevent\": {}\n },\n \"options\": {}\n}\n"},{"location":"reference/types/wsstart/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type type WSActionBase.type FFEnum:\"start\"\"ack\"\"protocol_error\"\"event_batch\" autoack WSStart.autoack bool namespace WSStart.namespace string name WSStart.name string ephemeral WSStart.ephemeral bool filter WSStart.filter SubscriptionFilter options WSStart.options SubscriptionOptions"},{"location":"reference/types/wsstart/#subscriptionfilter","title":"SubscriptionFilter","text":"Field Name Description Type events Regular expression to apply to the event type, to subscribe to a subset of event types string message Filters specific to message events. If an event is not a message event, these filters are ignored MessageFilter transaction Filters specific to events with a transaction. If an event is not associated with a transaction, this filter is ignored TransactionFilter blockchainevent Filters specific to blockchain events. If an event is not a blockchain event, these filters are ignored BlockchainEventFilter topic Regular expression to apply to the topic of the event, to subscribe to a subset of topics. 
Note for messages sent with multiple topics, a separate event is emitted for each topic string topics Deprecated: Please use 'topic' instead string tag Deprecated: Please use 'message.tag' instead string group Deprecated: Please use 'message.group' instead string author Deprecated: Please use 'message.author' instead string"},{"location":"reference/types/wsstart/#messagefilter","title":"MessageFilter","text":"Field Name Description Type tag Regular expression to apply to the message 'header.tag' field string group Regular expression to apply to the message 'header.group' field string author Regular expression to apply to the message 'header.author' field string"},{"location":"reference/types/wsstart/#transactionfilter","title":"TransactionFilter","text":"Field Name Description Type type Regular expression to apply to the transaction 'type' field string"},{"location":"reference/types/wsstart/#blockchaineventfilter","title":"BlockchainEventFilter","text":"Field Name Description Type name Regular expression to apply to the blockchain event 'name' field, which is the name of the event in the underlying blockchain smart contract string listener Regular expression to apply to the blockchain event 'listener' field, which is the UUID of the event listener. So you can restrict your subscription to certain blockchain listeners. Alternatively, to avoid your application needing to know listener UUIDs, you can set the 'topic' field of blockchain event listeners, and use a topic filter on your subscriptions string"},{"location":"reference/types/wsstart/#subscriptionoptions","title":"SubscriptionOptions","text":"Field Name Description Type firstEvent Whether your application would like to receive events from the 'oldest' event emitted by your FireFly node (from the beginning of time), or the 'newest' event (from now), or a specific event sequence.
Default is 'newest' SubOptsFirstEvent readAhead The number of events to stream ahead to your application, while waiting for confirmation of consumption of those events. At least once delivery semantics are used in FireFly, so if your application crashes/reconnects this is the maximum number of events you would expect to be redelivered after it restarts uint withData Whether message events delivered over the subscription, should be packaged with the full data of those messages in-line as part of the event JSON payload. Or if the application should make separate REST calls to download that data. May not be supported on some transports. bool batch Events are delivered in batches in an ordered array. The batch size is capped to the readAhead limit. The event payload is always an array even if there is a single event in the batch, allowing client-side optimizations when processing the events in a group. Available for both Webhooks and WebSockets. bool batchTimeout When batching is enabled, the optional timeout to send events even when the batch hasn't filled. string fastack Webhooks only: When true the event will be acknowledged before the webhook is invoked, allowing parallel invocations bool url Webhooks only: HTTP url to invoke. Can be relative if a base URL is set in the webhook plugin config string method Webhooks only: HTTP method to invoke. 
Default=POST string json Webhooks only: Whether to assume the response body is JSON, regardless of the returned Content-Type bool reply Webhooks only: Whether to automatically send a reply event, using the body returned by the webhook bool replytag Webhooks only: The tag to set on the reply message string replytx Webhooks only: The transaction type to set on the reply message string headers Webhooks only: Static headers to set on the webhook request `` query Webhooks only: Static query params to set on the webhook request `` tlsConfigName The name of an existing TLS configuration associated to the namespace to use string input Webhooks only: A set of options to extract data from the first JSON input data in the incoming message. Only applies if withData=true WebhookInputOptions retry Webhooks only: a set of options for retrying the webhook call WebhookRetryOptions httpOptions Webhooks only: a set of options for HTTP WebhookHTTPOptions"},{"location":"reference/types/wsstart/#webhookinputoptions","title":"WebhookInputOptions","text":"Field Name Description Type query A top-level property of the first data input, to use for query parameters string headers A top-level property of the first data input, to use for headers string body A top-level property of the first data input, to use for the request body. 
Default is the whole first body string path A top-level property of the first data input, to use for a path to append with escaping to the webhook path string replytx A top-level property of the first data input, to use to dynamically set whether to pin the response (so the requester can choose) string"},{"location":"reference/types/wsstart/#webhookretryoptions","title":"WebhookRetryOptions","text":"Field Name Description Type enabled Enables retry on HTTP calls, defaults to false bool count Number of times to retry the webhook call in case of failure int initialDelay Initial delay between retries when we retry the webhook call string maxDelay Max delay between retries when we retry the webhook call string"},{"location":"reference/types/wsstart/#webhookhttpoptions","title":"WebhookHTTPOptions","text":"Field Name Description Type proxyURL HTTP proxy URL to use for outbound requests to the webhook string tlsHandshakeTimeout The max duration to hold a TLS handshake alive string requestTimeout The max duration to wait for an HTTP request to complete string maxIdleConns The max number of idle connections to hold pooled int idleTimeout The max duration to hold a HTTP keepalive connection between calls string connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted. string expectContinueTimeout See ExpectContinueTimeout in the Go docs string"},{"location":"releasenotes/","title":"Release Notes","text":"Full release notes
"},{"location":"releasenotes/#v133-mar-25-2025","title":"v1.3.3 - Mar 25, 2025","text":"What's New: - Added a new interface for blockchain plugins to stream receipt notifications in transactional batches - For blockchain connectors that have an ack-based reliable receipt stream (or other checkpoint system) - Allows strictly ordered delivery of receipts from blockchain plugins that support it - Allows resilient receipt delivery to core, against a checkpoint maintained in the connector - Changes in metrics: - Added new Data Exchange metrics for monitoring by a time-series and alerting system - ff_multiparty_node_identity_dx_mismatch notifies when the certificate in FireFly Core is different to the one stored in Data Exchange - ff_multiparty_node_identity_dx_expiry_epoch emits the expiry timestamp of the Data Exchange certificate, useful for SREs to monitor before it expires - Added a namespace label to existing metrics to separate metrics more easily - Added HTTP response time and complete gauge support to firefly-common - Allowed the metrics server to host additional routes such as status endpoints - This resulted in a new monitoring configuration section, which is more appropriate than metrics; the metrics section has now been deprecated - Fix to an issue that resulted in retried private messages using the local namespace rather than the network namespace - Fix to an issue that could result in messages being marked Pending on re-delivery of a batch over the network - Miscellaneous bug fixes and minor improvements - Documentation updates, including a new troubleshooting section for multiparty messages - CVE fixes and adoption of the OpenSSF scorecard on key repositories
As part of the changes to add the new namespace label to metrics, we changed from using a Prometheus Counter to a CounterVec. As a result there is no default value of 0 on the counter, which means that when users query for a specific metric such as ff_message_rejected_total it will not be available until the CounterVec associated with that metric is incremented. This has been determined to be an easy upgrade for SREs monitoring these metrics, hence its inclusion in a patch release.
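To illustrate the behaviour described above, here is a minimal plain-Python sketch (not the real Prometheus client library) of why a labelled counter vector has no series for a given label combination, such as namespace="default", until it is first incremented, whereas a plain counter exists at 0 from the moment it is registered:

```python
class Counter:
    """Plain counter: exists (at 0) as soon as it is registered."""
    def __init__(self):
        self.value = 0.0

    def inc(self, n=1.0):
        self.value += n


class CounterVec:
    """Labelled counter: a child series exists only after first use.

    Mimics the Prometheus behaviour described above -- querying for a
    specific label combination finds nothing until that combination has
    been incremented at least once.
    """
    def __init__(self, label_names):
        self.label_names = label_names
        self.children = {}  # tuple of label values -> Counter

    def with_labels(self, *values):
        return self.children.setdefault(values, Counter())

    def query(self, *values):
        """Return the series value, or None if the series does not exist."""
        child = self.children.get(values)
        return None if child is None else child.value


rejected = CounterVec(["namespace"])
# Before any increment, the namespace="default" series is simply absent:
missing = rejected.query("default")
rejected.with_labels("default").inc()
present = rejected.query("default")
```

The same effect is why a query for ff_message_rejected_total returns no data after the upgrade until the first rejection is counted in that namespace.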
The X-FireFly-Request-ID HTTP header is now passed through to FireFly dependency microservices.
This release includes lots of major hardening, performance improvements, and bug fixes, as well as more complete documentation and OpenAPI specifications.
Hyperledger FireFly v1.1.0 is a feature release that includes significant new functionality around namespaces and plugins, as detailed in FIR-12. As a result, upgrading an existing FireFly environment from any prior release may require special steps (depending on the functionality used).
If seamless data preservation is not required, you can simply create a new network from scratch using FireFly v1.1.0.
If you want to preserve data from an existing 1.0.x network, significant care has been taken to ensure that it is possible. Most existing environments can be upgraded with minimal extra steps. This document attempts to call out all potentially breaking changes (both common and uncommon), so that you can easily assess the impact of the upgrade and any needed preparation before proceeding.
"},{"location":"releasenotes/1.1_migration_guide/#before-upgrading","title":"Before Upgrading","text":"These steps are all safe to do while running FireFly v1.0.x. While they do not have to be done prior to upgrading, performing them ahead of time may allow you to preemptively fix some problems and ease the migration to v1.1.0.
"},{"location":"releasenotes/1.1_migration_guide/#common-steps","title":"Common Steps","text":"Upgrade to latest v1.0.x patch release
Before upgrading to v1.1.0, it is strongly recommended to upgrade to the latest v1.0.x patch release (v1.0.4 as of the writing of this document). Do not proceed any further in this guide until all nodes are successfully running the latest patch release version.
Fix any deprecated config usage
All items in FireFly's YAML config that were deprecated at any time in the v1.0.x line will be unsupported in v1.1.0. After upgrading to the latest v1.0.x patch release, you should therefore look for any deprecation warnings when starting FireFly, and ensure they are fixed before upgrading to v1.1.0. Failure to do so will cause your config file to be rejected in v1.1.0, and FireFly will fail to start.
You can utilize the ffconfig tool to automatically check and fix deprecated config with a command such as:
ffconfig migrate -f <input-file> -o <output-file> --to 1.0.4\n This should ensure your config file is acceptable to 1.0.x or 1.1.x.
Note that if you are attempting to migrate a Dockerized development environment (such as one stood up by the firefly-cli), you may need to edit the config file inside the Docker container. Environments created by a v1.0.x CLI do not expose the config file outside the Docker container.
"},{"location":"releasenotes/1.1_migration_guide/#less-common-situations","title":"Less Common Situations","text":"Record all broadcast namespaces in the config file
Expand for migration details only if your application uses non-default namespaces. FireFly v1.0 allowed for the dynamic creation of new namespaces by broadcasting a namespace definition to all nodes. This functionality is _removed_ in v1.1.0. If your network relies on any namespaces that were created via a broadcast, you must add those namespaces to the `namespaces.predefined` list in your YAML config prior to upgrade. If you do not, they will cease to function after upgrading to v1.1.0 (all events on those namespaces will be ignored by your node).Identify queries for organization/node identities
Expand for migration details only if your application queries /network/organizations or /network/nodes. Applications that query `/network/organizations` or `/network/nodes` will temporarily receive _empty result lists_ after upgrading to v1.1.0, until all identities have been re-registered (see steps in \"After Upgrading\"). This is because organization and node identities were broadcast on a global \"ff_system\" namespace in v1.0, but are no longer global in v1.1.0. The simplest solution is to shut down applications until the FireFly upgrade is complete on all nodes and all identities have been re-broadcast. If this poses a problem and you require zero downtime from these APIs, you can proactively mitigate with the following steps in your application code: - Applications that query `/network/organizations` may be altered to _also_ query `/namespaces/ff_system/network/organizations` and combine the results (but should disregard the second query if it fails). - Applications that query `/network/nodes` may be altered to _also_ query `/namespaces/ff_system/network/nodes` and combine the results (but should disregard the second query if it fails). Further details on the changes to `/network` APIs are provided in the next section. Identify usage of changed APIs
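The combine-and-disregard mitigation described above for `/network/organizations` can be sketched as follows. Here `fetch` is a hypothetical HTTP helper standing in for your application's API client, and identity records are assumed to carry an "id" field:

```python
def combined_orgs(fetch):
    """Query both routes and merge, per the zero-downtime mitigation.

    `fetch(path)` is a hypothetical helper that returns a list of identity
    records or raises on an HTTP error. The legacy ff_system query is
    disregarded entirely if it fails.
    """
    results = fetch("/network/organizations")
    try:
        legacy = fetch("/namespaces/ff_system/network/organizations")
    except Exception:
        legacy = []  # disregard the second query if it fails
    seen = {org["id"] for org in results}
    return results + [org for org in legacy if org["id"] not in seen]
```

The same pattern applies to `/network/nodes` with `/namespaces/ff_system/network/nodes`.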
Expand for migration details on all changes to /namespaces, /status, and /network APIs. The primary API change in this version is that the \"global\" paths beginning with `/network` and `/status` have been relocated under the `/namespaces/{ns}` prefix, as this data is now specific to a namespace instead of being global. At the same time, the API server has been enhanced so that omitting a namespace from an API path will _query the default namespace_ instead. That is, querying `/messages` is now the same as querying `/namespaces/default/messages` (assuming your default namespace is named \"default\"). This has the effect that most of the moved APIs will continue to function without requiring changes. See below for details on the affected paths. These global routes have been moved under `/namespaces/{ns}`. Continuing to use them without the namespace prefix **will still work**, and will simply query the default namespace. /network/diddocs/{did}\n/network/nodes\n/network/nodes/{nameOrId}\n/network/nodes/self\n/network/organizations\n/network/organizations/{nameOrId}\n/network/organizations/self\n/status\n/status/batchmanager\n These global routes have been moved under `/namespaces/{ns}` and have also been deprecated in favor of a new route name. Continuing to use them without the namespace prefix **will still work**, and will simply query the default namespace. However, it is recommended to switch to the new API spelling when possible. /network/identities - replaced by existing /namespaces/{ns}/identities\n/network/identities/{did} - replaced by new /namespaces/{ns}/identities/{did}\n These global routes have been permanently renamed. They are deemed less likely to be used by client applications, but any usage **will be broken** by this release and must be changed after upgrading. 
/status/pins - moved to /namespaces/{ns}/pins (or /pins to query the default namespace)\n/status/websockets - moved to /websockets\n The response bodies of the following APIs have also had fields removed. Any usage of the removed fields **will be broken** by this release and must be changed after upgrading. /namespaces - removed all fields except \"name\", \"description\", \"created\"\n/namespaces/{ns} - same as above\n/namespaces/{ns}/status - removed \"defaults\"\n Adjust or remove usage of admin APIs
Expand for migration details on all changes to /admin and /spi. FireFly provides an administrative API in addition to the normal API. In v1.1.0, this has been renamed to SPI (Service Provider Interface). Consequently, all of the routes have moved from `/admin` to `/spi`, and the config section has been renamed from `admin` to `spi`. There is no automatic migration provided, so any usage of the old routes will need to be changed, and your config file will need to be adjusted if you wish to keep the SPI enabled (although it is perfectly fine to have both `admin` and `spi` sections if needed for migration). The ability to set FireFly config via these routes has also been removed. Any usage of the `/admin/config` routes must be discontinued, and config should be set exclusively by editing the FireFly config file. The only route retained from this functionality was `/admin/config/reset`, which has been renamed to `/spi/reset` - this will continue to be available for performing a soft reset that reloads FireFly's config."},{"location":"releasenotes/1.1_migration_guide/#performing-the-upgrade","title":"Performing the Upgrade","text":"Backup current data
Before beginning the upgrade, it is recommended to take a full backup of your FireFly database(s). If you encounter any serious issues after the upgrade, you should revert to the old binary and restore your database snapshot. While down-migrations are provided to revert a database in place, they are not guaranteed to work in all scenarios.
Upgrade FireFly and all dependencies
Bring FireFly down and replace it with the new v1.1.0 binary. You should also replace other runtimes (such as blockchain, data exchange, and token connectors) with the supported versions noted in the v1.1.0 release. Once all binaries have been replaced, start them up again.
"},{"location":"releasenotes/1.1_migration_guide/#after-upgrading","title":"After Upgrading","text":"Ensure nodes start without errors
Ensure that FireFly starts without errors. There will likely be new deprecation warnings for config that was deprecated in v1.1.0, but these are safe to ignore for the moment. If you face any errors or crashes, please report the logs to the FireFly channel on Discord, and return your nodes to running the previous version of FireFly if necessary.
Re-broadcast organization and node identities
Once all nodes in the multiparty network have been upgraded and are running without errors, each node should re-broadcast its org and node identity by invoking /network/organizations/self and /network/nodes/self (or, if your application uses a non-default namespace, by invoking the /namespaces/{ns}-prefixed versions of these APIs).
This will ensure that queries to /network/organizations and /network/nodes return the expected results, and will register the identities in a way that can be supported by both V1 and V2 multiparty contracts (see \"Upgrading the Multi-Party Contract\").
Update config file to latest format
Once the network is stable, you should update your config file(s) again to remove deprecated configuration and set yourself up to take advantage of all the new configuration options available in v1.1.0.
You can utilize the ffconfig tool to automatically check and fix deprecated config with a command such as:
ffconfig migrate -f <input-file> -o <output-file>\n"},{"location":"releasenotes/1.1_migration_guide/#upgrading-the-multi-party-contract","title":"Upgrading the Multi-Party Contract","text":"FireFly v1.1.0 includes a new recommended version of the contract used for multi-party systems (for both Ethereum and Fabric). It also introduces a versioning method for this contract, and a path for migrating networks from one contract address to a new one.
After upgrading FireFly itself, it is recommended to upgrade your multi-party system to the latest contract version by following these steps.
ff deploy or a similar method.
namespaces:\n  predefined:\n    - name: default\n      multiparty:\n        enabled: true\n        contract:\n          - location:\n              address: 0x09f107d670b2e69a700a4d9ef1687490ae1568db\n          - location:\n              address: 0x1bee32b37dc48e99c6b6bf037982eb3bee0e816b\n This example assumes 0x09f1... represents the address of the original contract, and 0x1bee... represents the new one. Note that if you have multiple namespaces, you must repeat this step for each namespace in the config - and you must deploy a unique contract instance per namespace (in the new network rules, multiple namespaces cannot share a single contract).
- /namespaces/{ns}/network/action FireFly API with a body of {\"type\": \"terminate\"}. This will terminate the old contract and instruct all members to move simultaneously to the newly configured one.
- /namespaces/{ns}/status on each node and checking that the active multi-party contract matches the new address.

Hyperledger FireFly v1.2.0 is a feature release that includes new features for tokens and data management as well as enhancements for debugging FireFly apps and operating FireFly nodes.
For the most part, upgrading from v1.1.x to v1.2.0 should be a seamless experience, but there are several important things to note about changes between the two versions, which are described in detail on this page.
"},{"location":"releasenotes/1.2_migration_guide/#tokens-considerations","title":"Tokens considerations","text":"There are quite a few new features around tokens in FireFly v1.2.0. Most notably, FireFly's token APIs now work with a much wider variety of ERC-20, ERC-721, and ERC-1155 contracts, supporting variations of these contracts generated by the OpenZeppelin Contract Wizard.
"},{"location":"releasenotes/1.2_migration_guide/#sample-token-contract-deprecations","title":"Sample token contract deprecations","text":"In FireFly v1.2.0, two of the older, lesser-used sample token contracts have been deprecated. The ERC20NoData and ERC721NoData contracts have been updated and the previous versions are no longer supported, unless you set the USE_LEGACY_ERC20_SAMPLE=true or USE_LEGACY_ERC721_SAMPLE=true environment variables for your token connector.
For more details you can read the description of the pull requests (#104 and #109) where these changes were made.
"},{"location":"releasenotes/1.2_migration_guide/#differences-from-v110","title":"Differences from v1.1.0","text":""},{"location":"releasenotes/1.2_migration_guide/#optional-fields","title":"Optional fields","text":"Some token connectors support optional fields when used with certain contracts. For example, the ERC-721 token connector supports a URI field. If an optional field is specified in an API call to a token connector and contract that do not support that field, an error will be returned rather than the field being silently ignored.
"},{"location":"releasenotes/1.2_migration_guide/#auto-incrementing-token-index","title":"Auto incrementing token index","text":"In FireFly v1.2.0 the default ERC-721 and ERC-1155 contracts have changed to automatically increment the token index when a token is minted. This is useful when many tokens may be minted around the same time, or by different minters. This lets the blockchain handle the ordering and keep track of which token index should be minted next, rather than making that an application concern.
NOTE: These new contracts will only be used for brand new FireFly stacks with v1.2.0. If you have an existing stack, the new token contracts will not be used, unless you specifically deploy them and start using them.
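The idea above can be sketched as follows. This is illustrative only (the real contracts are Solidity): the next token index is contract state, so two minters never need to coordinate on which index to use.

```python
class TokenContract:
    """Sketch of an auto-incrementing mint, as in the new default contracts.

    Mirrors the design point in the text: the next token index lives in
    (on-chain) contract state rather than being an application concern.
    """
    def __init__(self):
        self.next_index = 1
        self.owners = {}

    def mint(self, to):
        token_id = self.next_index   # the contract picks the index
        self.next_index += 1
        self.owners[token_id] = to
        return token_id


pool = TokenContract()
# Two minters never need to agree on the next index out-of-band:
a = pool.mint("0xalice")
b = pool.mint("0xbob")
```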
"},{"location":"releasenotes/1.2_migration_guide/#data-management-considerations","title":"Data management considerations","text":"FireFly v1.2.0 introduces the ability to delete data records and their associated blobs, if present. This will remove the data and blob rows from the FireFly database, as well as removing the blob from the Data Exchange microservice. This can be very useful if your organization has data retention requirements for sensitive, private data and needs to purge data after a certain period of time.
Please note that this API only removes data from the FireFly node on which it is called. If data has been shared with other participants of a multi-party network, it is each participant's responsibility to satisfy their own data retention policies.
"},{"location":"releasenotes/1.2_migration_guide/#differences-from-v110_1","title":"Differences from v1.1.0","text":"It is important to note that FireFly now stores a separate copy of a blob for a given payload, even if the same data object is sent in different messages, by different network participants. Previously, in FireFly v1.1.0 the blob was de-duplicated in some cases. In FireFly v1.2.0, deleting the data object will result in each copy of the associated payload being removed.
NOTE: If data has been published to IPFS, it cannot be deleted completely. You can still call the DELETE method on it, and it will be removed from FireFly's database and Data Exchange, but the payload will still persist in IPFS.
Please see the optional token fields section above for details. If your application code is calling any token API endpoints with optional fields that are not supported by your token connector or contract, you will need to remove those fields from your API request or it will fail.
"},{"location":"releasenotes/1.2_migration_guide/#transaction-output-details","title":"Transaction output details","text":"In previous versions of FireFly, transaction output details used to appear under the output object in the response body. Behind the scenes, some of this data is now fetched from the blockchain connector asynchronously. If your application needs the detailed output, it should now add a fetchStatus=true query parameter when querying for an Operation. Additionally, the details have moved from the output field to a new detail field on the response body. For more details, please refer to the PRs where this change was made (#1111 and #1151). For a detailed example comparing what an Operation response body looks like in FireFly v1.2.0 compared with v1.1.x, you can expand the sections below.
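The client-side change can be sketched as follows: request the Operation with fetchStatus=true and prefer the new detail field, falling back to the summary output (which still carries the transaction hash in both example bodies below). `get_operation` is a hypothetical helper wrapping GET /operations/{id}?fetchStatus=true:

```python
def transaction_hash(get_operation, op_id):
    """Read a transaction hash from an Operation, v1.2.0-style.

    `get_operation(op_id, fetch_status=...)` is a hypothetical helper; the
    detailed receipt fields now live under "detail" rather than "output",
    and are only populated when fetchStatus=true is requested.
    """
    op = get_operation(op_id, fetch_status=True)
    detail = op.get("detail", {})
    # Fall back to the summary output, which still includes the hash:
    return detail.get("transactionHash") or op.get("output", {}).get("transactionHash")
```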
\n{\n \"id\": \"2b0ec132-2abd-40f0-aa56-79871a7a23b9\",\n \"namespace\": \"default\",\n \"tx\": \"cb0e6de1-50a9-44f2-a2ff-411f6dcc19c9\",\n \"type\": \"blockchain_invoke\",\n \"status\": \"Succeeded\",\n \"plugin\": \"ethereum\",\n \"input\": {\n \"idempotencyKey\": \"5a634941-29cb-4a4b-b5a7-196331723d6d\",\n \"input\": {\n \"newValue\": 42\n },\n \"interface\": \"46189886-cae5-42ff-bf09-25d4f58d649e\",\n \"key\": \"0x2ecd8d5d97fb4bb7af0fbc27d7b89fd6f0366350\",\n \"location\": {\n \"address\": \"0x9d7ea8561d4b21cba495d1bd29a6d3421c31cf8f\"\n },\n \"method\": {\n \"description\": \"\",\n \"id\": \"d1d2a0cf-19ea-42c3-89b8-cb65850fb9c5\",\n \"interface\": \"46189886-cae5-42ff-bf09-25d4f58d649e\",\n \"name\": \"set\",\n \"namespace\": \"default\",\n \"params\": [\n {\n \"name\": \"newValue\",\n \"schema\": {\n \"details\": {\n \"type\": \"uint256\"\n },\n \"type\": \"integer\"\n }\n }\n ],\n \"pathname\": \"set\",\n \"returns\": []\n },\n \"methodPath\": \"set\",\n \"options\": null,\n \"type\": \"invoke\"\n },\n \"output\": {\n \"Headers\": {\n \"requestId\": \"default:2b0ec132-2abd-40f0-aa56-79871a7a23b9\",\n \"type\": \"TransactionSuccess\"\n },\n \"protocolId\": \"000000000052/000000\",\n \"transactionHash\": \"0x9adae77a46bf869ee97aab38bb5d789fa2496209500801e87bf9e2cce945dc71\"\n },\n \"created\": \"2023-01-24T14:08:17.371587084Z\",\n \"updated\": \"2023-01-24T14:08:17.385558417Z\",\n \"detail\": {\n \"created\": \"2023-01-24T14:08:17.378147625Z\",\n \"firstSubmit\": \"2023-01-24T14:08:17.381787042Z\",\n \"gas\": \"42264\",\n \"gasPrice\": 0,\n \"history\": [\n {\n \"count\": 1,\n \"info\": \"Success=true,Receipt=000000000052/000000,Confirmations=0,Hash=0x9adae77a46bf869ee97aab38bb5d789fa2496209500801e87bf9e2cce945dc71\",\n \"lastOccurrence\": null,\n \"time\": \"2023-01-24T14:08:17.384371042Z\"\n },\n {\n \"count\": 1,\n \"info\": \"Submitted=true,Receipt=,Hash=0x9adae77a46bf869ee97aab38bb5d789fa2496209500801e87bf9e2cce945dc71\",\n \"lastOccurrence\": null,\n 
\"time\": \"2023-01-24T14:08:17.381908959Z\"\n }\n ],\n \"id\": \"default:2b0ec132-2abd-40f0-aa56-79871a7a23b9\",\n \"lastSubmit\": \"2023-01-24T14:08:17.381787042Z\",\n \"nonce\": \"34\",\n \"policyInfo\": null,\n \"receipt\": {\n \"blockHash\": \"0x7a2ca7cc57fe1eb4ead3e60d3030b123667d18eb67f4b390fb0f51f970f1fba0\",\n \"blockNumber\": \"52\",\n \"extraInfo\": {\n \"contractAddress\": null,\n \"cumulativeGasUsed\": \"28176\",\n \"from\": \"0x2ecd8d5d97fb4bb7af0fbc27d7b89fd6f0366350\",\n \"gasUsed\": \"28176\",\n \"status\": \"1\",\n \"to\": \"0x9d7ea8561d4b21cba495d1bd29a6d3421c31cf8f\"\n },\n \"protocolId\": \"000000000052/000000\",\n \"success\": true,\n \"transactionIndex\": \"0\"\n },\n \"sequenceId\": \"0185e41b-ade2-67e4-c104-5ff553135320\",\n \"status\": \"Succeeded\",\n \"transactionData\": \"0x60fe47b1000000000000000000000000000000000000000000000000000000000000002a\",\n \"transactionHash\": \"0x9adae77a46bf869ee97aab38bb5d789fa2496209500801e87bf9e2cce945dc71\",\n \"transactionHeaders\": {\n \"from\": \"0x2ecd8d5d97fb4bb7af0fbc27d7b89fd6f0366350\",\n \"to\": \"0x9d7ea8561d4b21cba495d1bd29a6d3421c31cf8f\"\n },\n \"updated\": \"2023-01-24T14:08:17.384371042Z\"\n }\n}\n v1.1.x Operation response body \n{\n \"id\": \"4a1a19cf-7fd2-43f1-8fae-1e3d5774cf0d\",\n \"namespace\": \"default\",\n \"tx\": \"2978a248-f5df-4c78-bf04-711ab9c79f3d\",\n \"type\": \"blockchain_invoke\",\n \"status\": \"Succeeded\",\n \"plugin\": \"ethereum\",\n \"input\": {\n \"idempotencyKey\": \"5dc2ee8a-be5c-4e60-995f-9e21818a441d\",\n \"input\": {\n \"newValue\": 42\n },\n \"interface\": \"752af5a3-d383-4952-88a9-b32b837ed1cb\",\n \"key\": \"0xd8a27cb390fd4f446acce01eb282c7808ec52572\",\n \"location\": {\n \"address\": \"0x7c0a598252183999754c53d97659af9436293b82\"\n },\n \"method\": {\n \"description\": \"\",\n \"id\": \"1739f25d-ab48-4534-b278-58c4cf151bf9\",\n \"interface\": \"752af5a3-d383-4952-88a9-b32b837ed1cb\",\n \"name\": \"set\",\n \"namespace\": \"default\",\n \"params\": [\n 
{\n \"name\": \"newValue\",\n \"schema\": {\n \"details\": {\n \"type\": \"uint256\"\n },\n \"type\": \"integer\"\n }\n }\n ],\n \"pathname\": \"set\",\n \"returns\": []\n },\n \"methodPath\": \"set\",\n \"options\": null,\n \"type\": \"invoke\"\n },\n \"output\": {\n \"_id\": \"default:4a1a19cf-7fd2-43f1-8fae-1e3d5774cf0d\",\n \"blockHash\": \"0x13660667b69f48646025a87db603abdeeaa88036e9a1252b1af4ec1fc3e1d850\",\n \"blockNumber\": \"52\",\n \"cumulativeGasUsed\": \"28176\",\n \"from\": \"0xd8a27cb390fd4f446acce01eb282c7808ec52572\",\n \"gasUsed\": \"28176\",\n \"headers\": {\n \"id\": \"8dfaabd1-4493-4a64-52dd-762497022ba2\",\n \"requestId\": \"default:4a1a19cf-7fd2-43f1-8fae-1e3d5774cf0d\",\n \"requestOffset\": \"\",\n \"timeElapsed\": 0.109499833,\n \"timeReceived\": \"2023-01-24T17:16:52.372449013Z\",\n \"type\": \"TransactionSuccess\"\n },\n \"nonce\": \"0\",\n \"receivedAt\": 1674580612482,\n \"status\": \"1\",\n \"to\": \"0x7c0a598252183999754c53d97659af9436293b82\",\n \"transactionHash\": \"0x522e5aac000f5befba61ddfd707aaf5c61314f47e00cd0c5b779f69dd14bd899\",\n \"transactionIndex\": \"0\"\n },\n \"created\": \"2023-01-24T17:16:52.368498346Z\",\n \"updated\": \"2023-01-24T17:16:52.48408293Z\"\n}\n"},{"location":"releasenotes/1.2_migration_guide/#local-development-considerations","title":"Local development considerations","text":"It is also worth noting that the default Ethereum blockchain connector in the FireFly CLI is now Evmconnect. Ethconnect is still fully supported, but FireFly v1.2.0 marks a point of maturity in the project where it is now the recommended choice for any Ethereum based FireFly stack.
"},{"location":"releasenotes/1.3_migration_guide/","title":"v1.3.0 Migration Guide","text":""},{"location":"releasenotes/1.3_migration_guide/#overview","title":"Overview","text":"Hyperledger FireFly v1.3.0 is a feature release that includes changes around event streaming, contract listeners, define/publish APIs as well as a range of general fixes.
For the most part, upgrading from v1.2.x to v1.3.0 should be a seamless experience, but there are several important things to note about changes between the two versions, which are described in detail on this page.
"},{"location":"releasenotes/1.3_migration_guide/#docker-image-file-permission-considerations","title":"Docker image file permission considerations","text":"Following security best practices, the official published Docker images for FireFly Core and all of its microservices now run as a non-root user by default. If you are running a FireFly release prior to v1.3.0, depending on how you were running your containers, you may need to adjust file permissions inside volumes that these containers write to. If you have overridden the default user for your containers (for example through a Kubernetes deployment) you may safely ignore this section.
\u26a0\ufe0f Warning: If you have been using the default root user and upgrade to FireFly v1.3.0 without changing these file permissions your services may fail to start.
The new default user is 1001. If you are not overriding the user for your container, this user or group needs to have write permissions in several places. The list of services and directories you should specifically check are:
- persistence.leveldb.path directory set in the config file
- rest.rest-gateway.openapi.storagePath directory in the config file
- rest.rest-gateway.openapi.eventsDB directory in the config file
- receipts.leveldb.path directory in the config file
- events.leveldb.path directory in the config file
- DATA_DIRECTORY environment variable (default /data)

As of FireFly v1.3.0, in multi-party namespaces, contract interfaces, contract APIs, and token pools have distinct steps in their creation flow, and by default they are unpublished.
The following changes impact contract interfaces, contract APIs, and token pools.
Previously, when creating one of the affected resources in a multi-party network, if successful, the resource would be automatically broadcasted to other namespaces. In FireFly v1.3.0 this behaviour has changed: when one of these resources is created it has one of two distinct states, published or unpublished. The default state for a resource after creation (unless FireFly is told otherwise) is unpublished.
When a resource is unpublished it is not broadcasted to other namespaces in the multi-party network, and it is not pinned to the blockchain. In this state, it is possible to call the DELETE APIs to remove the resource (such as in the case where configuration needs to be changed) and reclaim the name that has been provided to it, so that it can be recreated.
When a resource is published it is broadcasted to other namespaces in the multi-party network, and it is pinned to the blockchain. In this state, it is no longer possible to call the DELETE APIs to remove the resource.
In FireFly v1.2.0, to create one of the affected resources and publish it to other parties, a POST call would be made to its respective API route and the broadcast would happen immediately. To achieve the same behaviour in FireFly v1.3.0, there are two options for all impacted resources: either provide a query parameter at creation to signal immediate publish, or make a subsequent API call to publish the resource.
Previously, to create a contract interface a POST call would be made to /contracts/interfaces and the interface would be broadcasted to all other namespaces. In FireFly v1.3.0, this same call can be made with the publish=true query parameter, or a subsequent API call can be made on an unpublished interface on POST /contracts/interfaces/{name}/{version}/publish specifying the name and version of the interface.
For an exact view of the changes to contract interfaces, see PR #1279.
"},{"location":"releasenotes/1.3_migration_guide/#contract-apis","title":"Contract APIs","text":"Previously, to create a contract API a POST call would be made to /apis and the API would be broadcasted to all other namespaces. In FireFly v1.3.0, this same call can be made with the publish=true query parameter, or a subsequent API call can be made on an unpublished API on /apis/{apiName}/publish specifying the name of the API.
For an exact view of the changes to contract APIs, see PR #1322.
"},{"location":"releasenotes/1.3_migration_guide/#token-pools","title":"Token pools","text":"Previously, to create a token pool a POST call would be made to /tokens/pools and the token pool would be broadcasted to all other namespaces. In FireFly v1.3.0, this same call can be made with the publish=true query parameter, or a subsequent API call can be made on an unpublished token pool on /tokens/pools/{nameOrId}/publish specifying the name or ID of the token pool.
For an exact view of the changes to token pools, see PR #1261.
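The two publish options described above can be sketched, using token pools as the example, as follows. `post` is a hypothetical HTTP helper standing in for your FireFly API client; the routes themselves come from the text above:

```python
def create_and_publish_pool(post, pool_def, immediate=True):
    """Two ways to publish a token pool in FireFly v1.3.0.

    `post(path, body)` is a hypothetical HTTP helper. With immediate=True,
    the publish=true query parameter broadcasts the pool at creation time;
    otherwise the pool is created unpublished (and deletable), and a second
    call publishes it by name.
    """
    if immediate:
        return post("/tokens/pools?publish=true", pool_def)
    post("/tokens/pools", pool_def)                      # created unpublished
    return post(f"/tokens/pools/{pool_def['name']}/publish", {})
```

Contract interfaces and contract APIs follow the same two-step pattern on their own routes.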
"},{"location":"releasenotes/1.3_migration_guide/#event-stream-considerations","title":"Event stream considerations","text":""},{"location":"releasenotes/1.3_migration_guide/#single-event-stream-per-namespace","title":"Single event stream per namespace","text":"In this release, the model for event streams in a multi-party network has fundamentally changed. Previously, there was a single event stream for each blockchain plugin, even if this plugin served multiple namespaces. In FireFly v1.3.0 there is now a single event stream per namespace in the network.
When migrating from FireFly v1.2.X to v1.3.0, due to these changes, existing event streams will be rebuilt. This means that connectors will replay past events to FireFly, but FireFly will automatically de-duplicate them by design so this is a safe operation.
The migration to individual event streams promotes high-availability capability but is not itself a breaking change; however, the ID format for event streams has changed. Event streams now follow the format <plugin_topic_name>/<namespace_name>. For example, an event stream for the default namespace with a plugin topic of 0 would now be: 0/default.
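The ID format above can be sketched as a pair of trivial helpers, useful if your tooling needs to build or pick apart stream IDs (helper names are ours, not FireFly APIs):

```python
def event_stream_id(plugin_topic, namespace):
    """Build a v1.3.0 event stream ID: <plugin_topic_name>/<namespace_name>."""
    return f"{plugin_topic}/{namespace}"


def parse_event_stream_id(stream_id):
    """Split a v1.3.0 event stream ID back into (plugin_topic, namespace)."""
    topic, _, namespace = stream_id.partition("/")
    return topic, namespace
```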
In summary, these changes should not impact end users of FireFly, but they are noted here as they are significant architectural changes to the relationships between namespaces, plugins, and connectors.
For an exact view of the changes, see PR #1388.
"},{"location":"releasenotes/1.3_migration_guide/#configuration-considerations","title":"Configuration considerations","text":""},{"location":"releasenotes/1.3_migration_guide/#deprecated-configuration","title":"Deprecated configuration","text":"In FireFly v1.3.0 deprecated configuration options for the blockchain, database, dataexchange, sharedstorage and tokens plugins have been removed, and can no longer be provided.
For an exact view of the changes, see PR #1289.
"},{"location":"releasenotes/1.3_migration_guide/#token-pool-considerations","title":"Token pool considerations","text":""},{"location":"releasenotes/1.3_migration_guide/#activity-indicator-changes","title":"Activity indicator changes","text":"Token pools have a status. Previously, when a token pool was created, it would go into a pending state immediately following creation, and then into a confirmed state once it had been confirmed on the chain. This behaviour is still consistent in FireFly v1.3.0, but the representation of the data has changed.
Previously, token pools had a state field with an enumerated value of either pending or confirmed. This has been replaced with an active boolean field, where true indicates the token pool has been committed on chain, and false indicates the transaction has not yet been confirmed.
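For client code migrating from the old enumerated state field to the new active boolean, the mapping can be sketched as follows. Note that state_to_active is a hypothetical helper for illustration, not part of FireFly:

```shell
# "confirmed" maps to active=true; anything else (such as "pending") to false.
state_to_active() {
  case "$1" in
    confirmed) echo true ;;
    *) echo false ;;
  esac
}
state_to_active confirmed   # -> true
state_to_active pending     # -> false
```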
For an exact view of the changes, see PR #1305.
"},{"location":"releasenotes/1.3_migration_guide/#fabconnect-event-considerations","title":"FabConnect event considerations","text":""},{"location":"releasenotes/1.3_migration_guide/#fabconnect-protocol-id-format-changes","title":"FabConnect Protocol ID format changes","text":"Prior to FireFly v1.3.0, when the FabConnect client indexed events submitted by the Fabric SDK, FireFly would deduplicate events into a single event because the protocol ID of the events compiled into a single block would evaluate to be the same. In this release, we have changed the format of the calculated protocol ID so that it is unique across events even if they are located within the same block. Crucially, the new format includes the transaction hash, so events are no longer alphanumerically sortable.
For an exact view of the changes, see PR #1345.
"},{"location":"releasenotes/1.3_migration_guide/#local-development-considerations","title":"Local development considerations","text":""},{"location":"releasenotes/1.3_migration_guide/#go-version-upgrade","title":"Go version upgrade","text":"FireFly v1.3.0 now uses Go 1.21 across all modules.
"},{"location":"swagger/","title":"API Spec","text":"This is the FireFly OpenAPI Specification document generated by FireFly
Note: The 'Try it out' buttons will not work on this page because it's not running against a live version of FireFly. To actually try it out, we recommend using the FireFly CLI to start an instance on your local machine (which will start the FireFly core on port 5000 by default) and then open the Swagger UI associated with your local node by opening a new tab and visiting http://localhost:5000/api
"},{"location":"troubleshooting/","title":"Troubleshooting","text":"This section includes troubleshooting tips for identifying issues with a running FireFly node, and for gathering useful data before opening an issue.
"},{"location":"troubleshooting/undelivered_messages/","title":"Undelivered messages","text":"When using FireFly in multiparty mode to deliver broadcast or private messages, one potential problem is that of undelivered messages. In general FireFly's message delivery service should be extremely reliable, but understanding when something has gone wrong (and how to recover) can be important for maintaining system health.
"},{"location":"troubleshooting/undelivered_messages/#background","title":"Background","text":"This guide assumes some familiarity with how multiparty event sequencing works. In general, FireFly messages come in three varieties:
All messages are batched for efficiency, but in cases of low throughput, you may frequently see batches containing exactly one message.
\"Pinned\" messages are those that use the blockchain ledger for reliable timestamping and ordering. These messages have two pieces which must be received before the message can be processed: the batch is the actual contents of the message(s), and the pin is the lightweight blockchain transaction that records the existence and ordering of that batch. We frequently refer to this combination as a batch-pin.
Note: there is a fourth type of message denoted with the type \"definition\", used for things such as identity claims and advertisement of contract APIs. For most troubleshooting purposes these can be treated the same as pinned broadcast messages, as they follow the same pattern (with only a few additional processing steps inside FireFly).
"},{"location":"troubleshooting/undelivered_messages/#symptoms","title":"Symptoms","text":"When some part of the multiparty messaging infrastructure requires troubleshooting, common symptoms include:
When troubleshooting one of the symptoms above, the main goal is to identify the specific piece of the infrastructure that is experiencing an issue. This can lead you to diagnose specific issues such as misconfiguration, network problems, database integrity problems, or potential code bugs.
In all cases, the batch ID is the most critical piece of data for determining the nature of the issue. You can usually retrieve the batch for a particular message by querying /messages/<message-id> and looking for the batch field in the returned response. In rare cases, if this is not populated, you can also retrieve the message transaction via /messages/<message-id>/transaction, and then you can use the transaction ID to query /batches?tx.id=<transaction-id>.
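The lookup above can be scripted; this is a sketch assuming a local FireFly node on port 5000, and extract_batch is a hypothetical sed helper (a real deployment might prefer jq). The last line feeds it a sample response so the sketch runs standalone:

```shell
# Pull the "batch" field out of a JSON message response.
extract_batch() { sed -n 's/.*"batch":"\([0-9a-f-]*\)".*/\1/p'; }
# Usage against a live node:
#   curl -s http://localhost:5000/api/v1/messages/<message-id> | extract_batch
echo '{"header":{"id":"..."},"batch":"11111111-2222-3333-4444-555555555555"}' | extract_batch
# -> 11111111-2222-3333-4444-555555555555
```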
The batch ID will be the same on all nodes involved in the messaging flow. Therefore, the following two steps can be easily performed to check for the existence of the expected items:
Query /batches/<batch-id> on each node that should have the message, and /pins?batch=<batch-id> on each node that should have the message (for pinned messages only). Then choose one of these scenarios to focus in on an area of interest:
"},{"location":"troubleshooting/undelivered_messages/#1-is-the-batch-missing-on-a-node-that-should-have-received-it","title":"1) Is the batch missing on a node that should have received it?","text":"For private messages, this indicates a potential problem with data exchange. Check the sending node to see if the FireFly operations succeeded when sending the batch via data exchange, and check the data exchange logs for any issues processing it (the FireFly operation ID can be used to trace the operation through data exchange as well). If an operation failed on the sending node, you may need to retry it with /operations/<op-id>/retry.
For broadcast messages, this indicates a potential problem with IPFS. Check the sending node to see if the FireFly operations succeeded when uploading the batch to IPFS, and the receiving node to see if the operations succeeded when downloading the batch from IPFS. If an operation failed, you may need to retry it with /operations/<op-id>/retry.
This indicates a potential problem with the blockchain connector. Check if the underlying blockchain node is healthy and mining blocks. Check the sending FireFly node to see if the operation succeeded when pinning the batch via the blockchain. Check the blockchain connector logs (such as evmconnect or fabconnect) to see if it is successfully processing events from the blockchain, or if it is encountering any errors before forwarding those events on to FireFly.
"},{"location":"troubleshooting/undelivered_messages/#3-are-the-batch-and-pin-both-present-but-the-messages-from-the-batch-are-still-stuck-in-sent-or-pending","title":"3) Are the batch and pin both present, but the messages from the batch are still stuck in \"sent\" or \"pending\"?","text":"Check the pin details to see if it contains a field \"dispatched\": true. If this field is false or missing, it means that the pin was received but couldn't be matched successfully with the off-chain batch contents. Check the FireFly logs and search for the batch ID - likely this issue is in FireFly and it will have logged some problem while aggregating the batch-pin. In some cases, the FireFly logs may indicate that the pin could not be dispatched because it was \"stuck\" behind another pin on the same context - so you may need to follow the trail to a batch-pin for a different batch and determine why that earlier one was not processed (by starting over on this rubric and troubleshooting that batch).
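The dispatched check above can be sketched in shell. Note that pin_dispatched is a hypothetical grep helper for illustration; the last line feeds it a sample pin record so the sketch runs standalone:

```shell
# Check whether a pin record (as returned from /pins?batch=<batch-id>)
# contains "dispatched": true.
pin_dispatched() { grep -q '"dispatched": *true'; }
echo '{"sequence":12,"dispatched":true}' | pin_dispatched && echo dispatched || echo stuck
# -> dispatched
```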
It's possible that the above steps may lead to an obvious solution (such as recovering a crashed service or retrying a failed operation). If they do not, you can open an issue. The more detail you can include from the troubleshooting above (including the type of message, the nodes involved, and the details on the batch and pin found when examining each node), the more likely it is that someone can help to suggest additional troubleshooting. Full logs from FireFly, and (as deemed relevant from the troubleshooting above) full logs from the data exchange or blockchain connector runtimes, will also make it easier to offer additional insight.
"},{"location":"tutorials/basic_auth/","title":"Basic Auth","text":""},{"location":"tutorials/basic_auth/#quick-reference","title":"Quick reference","text":"FireFly has a pluggable auth system which can be enabled at two different layers of the stack. At the top, auth can be enabled at the HTTP listener level. This will protect all requests to the given listener. FireFly has three different HTTP listeners, which could each use a different auth scheme:
Auth can also be enabled at the namespace level within FireFly. This enables several different use cases. For example, you might have two different teams that want to use the same FireFly node, each with different sets of authorized users. You could configure them to use separate namespaces and create separate auth schemes on each.
FireFly has a basic auth plugin built in, which we will be configuring in this tutorial.
NOTE: This guide assumes that you have already gone through the Getting Started Guide and have set up and run a stack at least once.
"},{"location":"tutorials/basic_auth/#additional-info","title":"Additional info","text":"FireFly's built-in basic auth plugin uses a password hash file to store the list of authorized users. FireFly uses the bcrypt algorithm to compare passwords against the stored hash. You can use htpasswd on a command line to generate a hash file.
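As a sketch of the file format: the password file holds one "username:bcrypt-hash" entry per line. With apache2-utils installed you could also generate an entry non-interactively, for example htpasswd -nbB firefly <password> (-n prints to stdout, -b takes the password on the command line, -B selects bcrypt). The lines below only illustrate the format, using a dummy hash:

```shell
# One "user:hash" pair per line; bcrypt hashes start with the $2y$ prefix.
printf 'firefly:$2y$05$dummyhashdummyhashdummyhash\n' > /tmp/test_users
cut -d: -f1 /tmp/test_users   # -> firefly
```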
test_users password hash file","text":"touch test_users\n"},{"location":"tutorials/basic_auth/#create-a-user-named-firefly","title":"Create a user named firefly","text":"htpasswd -B test_users firefly\n You will be prompted to type the password for the new user twice. Optional: You can continue to add new users by running this command with a different username.
htpasswd -B test_users <username>\n"},{"location":"tutorials/basic_auth/#enable-basic-auth-at-the-namespace-level","title":"Enable basic auth at the Namespace level","text":"To enable auth at the namespace level we will need to edit the FireFly core config file. You can find the config file for the first node in your stack at the following path:
~/.firefly/stacks/<stack_name>/runtime/config/firefly_core_0.yml\n Open the config file in your favorite editor and add the auth section to the plugins list:
plugins:\n auth:\n - name: test_user_auth\n type: basic\n basic:\n passwordfile: /etc/firefly/test_users\n You will also need to add test_user_auth to the list of plugins used by the default namespace:
namespaces:\n predefined:\n - plugins:\n - database0\n - blockchain0\n - dataexchange0\n - sharedstorage0\n - erc20_erc721\n - test_user_auth\n"},{"location":"tutorials/basic_auth/#mount-the-password-hash-file-in-the-docker-container","title":"Mount the password hash file in the Docker container","text":"If you set up your FireFly stack using the FireFly CLI we will need to mount the password hash file in the Docker container, so that FireFly can actually read the file. This can be done by editing the docker-compose.override.yml file at:
~/.firefly/stacks/<stack_name>/docker-compose.override.yml\n Edit the file to look like this, replacing the path to your test_users file:
# Add custom config overrides here\n# See https://docs.docker.com/compose/extends\nversion: \"2.1\"\nservices:\n firefly_core_0:\n volumes:\n - PATH_TO_YOUR_TEST_USERS_FILE:/etc/firefly/test_users\n"},{"location":"tutorials/basic_auth/#restart-your-firefly-core-container","title":"Restart your FireFly Core container","text":"To restart your FireFly stack and have Docker pick up the new volume, run:
ff stop <stack_name>\nff start <stack_name>\n NOTE: The FireFly basic auth plugin reads this file at startup and will not read it again during runtime. If you add any users or change passwords, restarting the node will be necessary to use an updated file.
"},{"location":"tutorials/basic_auth/#test-basic-auth","title":"Test basic auth","text":"After FireFly starts back up, you should be able to test that auth is working correctly by making an unauthenticated request to the API:
curl http://localhost:5000/api/v1/status\n{\"error\":\"FF00169: Unauthorized\"}\n However, if we add the username and password that we created above, the request should succeed:
curl -u \"firefly:firefly\" http://localhost:5000/api/v1/status\n{\"namespace\":{\"name\":\"default\",\"networkName\":\"default\",\"description\":\"Default predefined namespace\",\"created\":\"2022-10-18T16:35:57.603205507Z\"},\"node\":{\"name\":\"node_0\",\"registered\":false},\"org\":{\"name\":\"org_0\",\"registered\":false},\"plugins\":{\"blockchain\":[{\"name\":\"blockchain0\",\"pluginType\":\"ethereum\"}],\"database\":[{\"name\":\"database0\",\"pluginType\":\"sqlite3\"}],\"dataExchange\":[{\"name\":\"dataexchange0\",\"pluginType\":\"ffdx\"}],\"events\":[{\"pluginType\":\"websockets\"},{\"pluginType\":\"webhooks\"},{\"pluginType\":\"system\"}],\"identity\":[],\"sharedStorage\":[{\"name\":\"sharedstorage0\",\"pluginType\":\"ipfs\"}],\"tokens\":[{\"name\":\"erc20_erc721\",\"pluginType\":\"fftokens\"}]},\"multiparty\":{\"enabled\":true,\"contract\":{\"active\":{\"index\":0,\"location\":{\"address\":\"0xa750e2647e24828f4fec2e6e6d61fc08ccca5efa\"},\"info\":{\"subscription\":\"sb-d0642f14-f89a-41bb-6fd4-ae74b9501b6c\",\"version\":2}}}}}\n"},{"location":"tutorials/basic_auth/#enable-auth-at-the-http-listener-level","title":"Enable auth at the HTTP listener level","text":"You may also want to enable auth at the HTTP listener level, for instance on the SPI (Service Provider Interface) to limit administrative actions. To enable auth at the HTTP listener level we will need to edit the FireFly core config file. You can find the config file for the first node in your stack at the following path:
~/.firefly/stacks/<stack_name>/runtime/config/firefly_core_0.yml\n Open the config file in your favorite editor and change the spi section to look like the following:
spi:\n address: 0.0.0.0\n enabled: true\n port: 5101\n publicURL: http://127.0.0.1:5101\n auth:\n type: basic\n basic:\n passwordfile: /etc/firefly/test_users\n"},{"location":"tutorials/basic_auth/#restart-firefly-to-apply-the-changes","title":"Restart FireFly to apply the changes","text":"NOTE You will need to mount the password hash file following the instructions above if you have not already.
You can run the following to restart your stack:
ff stop <stack_name>\nff start <stack_name>\n"},{"location":"tutorials/basic_auth/#test-basic-auth_1","title":"Test basic auth","text":"After FireFly starts back up, an unauthenticated request to the SPI should be rejected as unauthorized.
curl http://127.0.0.1:5101/spi/v1/namespaces\n{\"error\":\"FF00169: Unauthorized\"}\n Adding the username and password that we set earlier should make the request succeed.
curl -u \"firefly:firefly\" http://127.0.0.1:5101/spi/v1/namespaces\n[{\"name\":\"default\",\"networkName\":\"default\",\"description\":\"Default predefined namespace\",\"created\":\"2022-10-18T16:35:57.603205507Z\"}]\n"},{"location":"tutorials/broadcast_data/","title":"Broadcast data","text":""},{"location":"tutorials/broadcast_data/#quick-reference","title":"Quick reference","text":"A broadcast sends a message visible to all parties in the network; the message has one or more attached pieces of business data; data can be linked to a datatype; a batch can pin hundreds of message broadcasts. POST /api/v1/namespaces/default/messages/broadcast
{\n \"data\": [\n {\n \"value\": \"a string\"\n }\n ]\n}\n"},{"location":"tutorials/broadcast_data/#example-message-response","title":"Example message response","text":"{\n \"header\": {\n \"id\": \"607e22ad-04fa-434a-a073-54f528ca14fb\", // uniquely identifies this broadcast message\n \"type\": \"broadcast\", // set automatically\n \"txtype\": \"batch_pin\", // message will be batched, and sequenced via the blockchain\n \"author\": \"0x0a65365587a65ce44938eab5a765fe8bc6532bdf\", // set automatically in this example to the node org\n \"created\": \"2021-07-01T18:06:24.5817016Z\", // set automatically\n \"namespace\": \"default\", // the 'default' namespace was set in the URL\n \"topics\": [\n \"default\" // the default topic that the message is published on, if no topic is set\n ],\n // datahash is calculated from the data array below\n \"datahash\": \"5a7bbc074441fa3231d9c8fc942d68ef9b9b646dd234bb48c57826dc723b26fd\"\n },\n \"hash\": \"81acf8c8f7982dbc49258535561461601cbe769752fecec0f8ce0358664979e6\", // hash of the header\n \"state\": \"ready\", // this message is stored locally but not yet confirmed\n \"data\": [\n // one item of data was stored\n {\n \"id\": \"8d8635e2-7c90-4963-99cc-794c98a68b1d\", // can be used to query the data in the future\n \"hash\": \"c95d6352f524a770a787c16509237baf7eb59967699fb9a6d825270e7ec0eacf\" // sha256 hash of `\"a string\"`\n }\n ]\n}\n"},{"location":"tutorials/broadcast_data/#example-2-inline-object-data-to-a-topic-no-datatype-verification","title":"Example 2: Inline object data to a topic (no datatype verification)","text":"It is very good practice to set a tag and topic in each of your messages:
tag should tell the apps receiving the broadcast (including the local app) what to do when they receive the message. It's the reason for the broadcast - an application specific type for the message. topic should be something like a well known identifier that relates to the information you are publishing. It is used as an ordering context, so all broadcasts on a given topic are assured to be processed in order. POST /api/v1/namespaces/default/messages/broadcast
{\n \"header\": {\n \"tag\": \"new_widget_created\",\n \"topics\": [\"widget_id_12345\"]\n },\n \"data\": [\n {\n \"value\": {\n \"id\": \"widget_id_12345\",\n \"name\": \"superwidget\"\n }\n }\n ]\n}\n"},{"location":"tutorials/broadcast_data/#notes-on-why-setting-a-topic-is-important","title":"Notes on why setting a topic is important","text":"The FireFly aggregator uses the topic (obfuscated on chain) to determine if a message is the next message in an in-flight sequence for any groups the node is involved in. If it is, then that message must receive all off-chain private data and be confirmed before any subsequent messages can be confirmed on the same sequence.
So if you use the same topic in every message, then a single failed send on one topic blocks delivery of all messages between those parties, until the missing data arrives.
Instead it is best practice to set the topic on your messages to a value that identifies an ordered stream of business processing. Some examples:
The topic field is an array, because there are cases (such as merging two identifiers) where you need a message to be deterministically ordered across multiple sequences. However, this is an advanced use case and you are likely to set a single topic on the vast majority of your messages.
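For illustration, a hypothetical message that merges two widget identifiers (the tag, second ID, and value fields below are invented for this sketch) could list both ordering contexts in its topics array, so it is deterministically ordered against both sequences:

```json
{
  "header": {
    "tag": "widget_ids_merged",
    "topics": ["widget_id_12345", "widget_id_67890"]
  },
  "data": [
    {
      "value": {
        "survivingId": "widget_id_12345",
        "mergedId": "widget_id_67890"
      }
    }
  ]
}
```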
Here we make two API calls.
Create the data object explicitly, using a multi-part form upload
You can also just post JSON to this endpoint
Broadcast a message referring to that data
The Blob attachment gets published to shared storage
Example curl command (Linux/Mac) to grab an image from the internet, and pipe it into a multi-part form post to FireFly.
Note we use autometa to cause FireFly to automatically add the filename and size to the JSON part of the data object for us.
curl -sLo - https://github.com/hyperledger/firefly/raw/main/docs/firefly_logo.png \\\n | curl --form autometa=true --form file=@- \\\n http://localhost:5000/api/v1/namespaces/default/data\n"},{"location":"tutorials/broadcast_data/#example-data-response-from-blob-upload","title":"Example data response from Blob upload","text":"Status: 200 OK - your data is uploaded to your local FireFly node
At this point the data has not been shared with anyone else in the network
{\n // A uniquely generated ID, we can refer to when sending this data to other parties\n \"id\": \"97eb750f-0d0b-4c1d-9e37-1e92d1a22bb8\",\n \"validator\": \"json\", // the \"value\" part is JSON\n \"namespace\": \"default\", // from the URL\n // The hash is a combination of the hash of the \"value\" metadata, and the\n // hash of the blob\n \"hash\": \"997af6a9a19f06cc8a46872617b8bf974b106f744b2e407e94cc6959aa8cf0b8\",\n \"created\": \"2021-07-01T20:20:35.5462306Z\",\n \"value\": {\n \"filename\": \"-\", // dash is how curl represents the filename for stdin\n \"size\": 31185 // the size of the blob data\n },\n \"blob\": {\n // A hash reference to the blob\n \"hash\": \"86e6b39b04b605dd1b03f70932976775962509d29ae1ad2628e684faabe48136\"\n // Note at this point there is no public reference. The only place\n // this data has been uploaded to is our own private data exchange.\n // It's ready to be published to everyone (broadcast), or privately\n // transferred (send) to other parties in the network. But that hasn't\n // happened yet.\n }\n}\n"},{"location":"tutorials/broadcast_data/#broadcast-the-uploaded-data","title":"Broadcast the uploaded data","text":"Just include a reference to the id returned from the upload.
POST /api/v1/namespaces/default/messages/broadcast
{\n \"data\": [\n {\n \"id\": \"97eb750f-0d0b-4c1d-9e37-1e92d1a22bb8\"\n }\n ]\n}\n"},{"location":"tutorials/broadcast_data/#broadcasting-messages-using-the-sandbox","title":"Broadcasting Messages using the Sandbox","text":"All of the functionality discussed above can be done through the FireFly Sandbox.
To get started, open up the Web UI and Sandbox UI for at least one of your members. The URLs for these were printed in your terminal when you started your FireFly stack.
In the sandbox, enter your message into the message field as seen in the screenshot below.
Notice how the data field in the center panel updates in real time.
Click the blue Run button. This should return a 202 response immediately in the Server Response section and will populate the right hand panel with transaction information after a few seconds.
Go back to the FireFly UI (the URL for this would have been shown in the terminal when you started the stack) and you'll see your successful blockchain transaction
"},{"location":"tutorials/create_custom_identity/","title":"Create a Custom Identity","text":""},{"location":"tutorials/create_custom_identity/#quick-reference","title":"Quick reference","text":"Out of the box, a FireFly Supernode contains both an org and a node identity. Your use case might demand more granular notions of identity (ex. customers, clients, etc.). Instead of creating a Supernode for each identity, you can create multiple custom identities within a FireFly Supernode.
If you haven't started a FireFly stack already, please go to the Getting Started guide on how to Start your environment
"},{"location":"tutorials/create_custom_identity/#step-1-create-a-new-account","title":"Step 1: Create a new account","text":"The FireFly CLI has a helpful command to create an account in a local development environment for you.
NOTE: In a production environment, key management actions such as creation, encryption, unlocking, etc. may be very different, depending on what type of blockchain node and signer your specific deployment is using.
To create a new account on your local stack, run:
ff accounts create <stack_name>\n {\n \"address\": \"0xc00109e112e21165c7065da776c75cfbc9cdc5e7\",\n \"privateKey\": \"...\"\n}\n The FireFly CLI has created a new private key and address for us to be able to use, and it has loaded the encrypted private key into the signing container. However, we haven't told FireFly itself about the new key, or who it belongs to. That's what we'll do in the next steps.
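To use the new key in later steps, the address can be captured from the CLI's JSON output. This is a sketch: extract_address is a hypothetical sed helper, and the last line feeds it a copy of the sample output above so the sketch runs standalone:

```shell
# Pull the "address" field out of the `ff accounts create` JSON output.
extract_address() { sed -n 's/.*"address": *"\(0x[0-9a-fA-F]*\)".*/\1/p'; }
# Usage against a live stack:
#   ADDRESS=$(ff accounts create <stack_name> | extract_address)
echo '{"address":"0xc00109e112e21165c7065da776c75cfbc9cdc5e7","privateKey":"..."}' | extract_address
# -> 0xc00109e112e21165c7065da776c75cfbc9cdc5e7
```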
"},{"location":"tutorials/create_custom_identity/#step-2-query-the-parent-org-for-its-uuid","title":"Step 2: Query the parent org for its UUID","text":"If we want to create a new custom identity under the organizational identity that we're using in a multiparty network, first we will need to look up the UUID for our org identity. We can look that up by making a GET request to the status endpoint on the default namespace.
GET http://localhost:5000/api/v1/status
{\n \"namespace\": {...},\n \"node\": {...},\n \"org\": {\n \"name\": \"org_0\",\n \"registered\": true,\n \"did\": \"did:firefly:org/org_0\",\n \"id\": \"1c0abf75-0f3a-40e4-a8cd-5ff926f80aa8\", // We need this in Step 3\n \"verifiers\": [\n {\n \"type\": \"ethereum_address\",\n \"value\": \"0xd7320c76a2efc1909196dea876c4c7dabe49c0f4\"\n }\n ]\n },\n \"plugins\": {...},\n \"multiparty\": {...}\n}\n"},{"location":"tutorials/create_custom_identity/#step-3-register-the-new-custom-identity-with-firefly","title":"Step 3: Register the new custom identity with FireFly","text":"Now we can POST to the identities endpoint to create a new custom identity. We will include the UUID of the organizational identity from the previous step in the \"parent\" field in the request.
POST http://localhost:5000/api/v1/identities
{\n \"name\": \"myCustomIdentity\",\n \"key\": \"0xc00109e112e21165c7065da776c75cfbc9cdc5e7\", // Signing Key from Step 1\n \"parent\": \"1c0abf75-0f3a-40e4-a8cd-5ff926f80aa8\" // Org UUID from Step 2\n}\n"},{"location":"tutorials/create_custom_identity/#response_1","title":"Response","text":"{\n \"id\": \"5ea8f770-e004-48b5-af60-01994230ed05\",\n \"did\": \"did:firefly:myCustomIdentity\",\n \"type\": \"custom\",\n \"parent\": \"1c0abf75-0f3a-40e4-a8cd-5ff926f80aa8\",\n \"namespace\": \"\",\n \"name\": \"myCustomIdentity\",\n \"messages\": {\n \"claim\": \"817b7c79-a934-4936-bbb1-7dcc7c76c1f4\",\n \"verification\": \"ae55f998-49b1-4391-bed2-fa5e86dc85a2\",\n \"update\": null\n }\n}\n"},{"location":"tutorials/create_custom_identity/#step-4-query-the-new-custom-identity","title":"Step 4: Query the New Custom Identity","text":"Lastly, if we want to confirm that the new identity has been created, we can query the identities endpoint to see our new custom identity.
"},{"location":"tutorials/create_custom_identity/#request_2","title":"Request","text":"GET http://localhost:5000/api/v1/identities?fetchverifiers=true
NOTE: Using fetchverifiers=true will return the cryptographic verification mechanism for the FireFly identity.
[\n {\n \"id\": \"5ea8f770-e004-48b5-af60-01994230ed05\",\n \"did\": \"did:firefly:myCustomIdentity\",\n \"type\": \"custom\",\n \"parent\": \"1c0abf75-0f3a-40e4-a8cd-5ff926f80aa8\",\n \"namespace\": \"default\",\n \"name\": \"myCustomIdentity\",\n \"messages\": {\n \"claim\": \"817b7c79-a934-4936-bbb1-7dcc7c76c1f4\",\n \"verification\": \"ae55f998-49b1-4391-bed2-fa5e86dc85a2\",\n \"update\": null\n },\n \"created\": \"2022-09-19T18:10:47.365068013Z\",\n \"updated\": \"2022-09-19T18:10:47.365068013Z\",\n \"verifiers\": [\n {\n \"type\": \"ethereum_address\",\n \"value\": \"0xfe1ea8c8a065a0cda424e2351707c7e8eb4d2b6f\"\n }\n ]\n },\n { ... },\n { ... }\n]\n"},{"location":"tutorials/define_datatype/","title":"Define a datatype","text":""},{"location":"tutorials/define_datatype/#quick-reference","title":"Quick reference","text":"As your use case matures, it is important to agree formal datatypes between the parties. These canonical datatypes need to be defined and versioned, so that each member can extract and transform data from their internal systems into this datatype.
Datatypes are broadcast to the network so everybody refers to the same JSON schema when validating their data. The broadcast must complete before a datatype can be used by an application to upload/broadcast/send data. The same system of broadcast within FireFly is used to broadcast definitions of datatypes, as is used to broadcast the data itself.
"},{"location":"tutorials/define_datatype/#additional-info","title":"Additional info","text":"POST /api/v1/namespaces/{ns}/datatypes
{\n \"name\": \"widget\",\n \"version\": \"0.0.2\",\n \"value\": {\n \"$id\": \"https://example.com/widget.schema.json\",\n \"$schema\": \"https://json-schema.org/draft/2020-12/schema\",\n \"title\": \"Widget\",\n \"type\": \"object\",\n \"properties\": {\n \"id\": {\n \"type\": \"string\",\n \"description\": \"The unique identifier for the widget.\"\n },\n \"name\": {\n \"type\": \"string\",\n \"description\": \"The person's last name.\"\n }\n }\n }\n}\n"},{"location":"tutorials/define_datatype/#example-message-response","title":"Example message response","text":"Status: 202 Accepted - a broadcast message has been sent, and on confirmation the new datatype will be created (unless it conflicts with another definition with the same name and version that was ordered onto the blockchain before this definition).
{\n \"header\": {\n \"id\": \"727f7d3a-d07e-4e80-95af-59f8d2ac7531\", // this is the ID of the message, not the data type\n \"type\": \"definition\", // a special type for system broadcasts\n \"txtype\": \"batch_pin\", // the broadcast is pinned to the chain\n \"author\": \"0x0a65365587a65ce44938eab5a765fe8bc6532bdf\", // the local identity\n \"created\": \"2021-07-01T21:06:26.9997478Z\", // the time the broadcast was sent\n \"namespace\": \"ff_system\", // the data/message broadcast happens on the system namespace\n \"topic\": [\n \"ff_ns_default\" // the namespace itself is used in the topic\n ],\n \"tag\": \"ff_define_datatype\", // a tag instructing FireFly to process this as a datatype definition\n \"datahash\": \"56bd677e3e070ba62f547237edd7a90df5deaaf1a42e7d6435ec66a587c14370\"\n },\n \"hash\": \"5b6593720243831ba9e4ad002c550e95c63704b2c9dbdf31135d7d9207f8cae8\",\n \"state\": \"ready\", // this message is stored locally but not yet confirmed\n \"data\": [\n {\n \"id\": \"7539a0ab-78d8-4d42-b283-7e316b3afed3\", // this data object in the ff_system namespace, contains the schema\n \"hash\": \"22ba1cdf84f2a4aaffac665c83ff27c5431c0004dc72a9bf031ae35a75ac5aef\"\n }\n ]\n}\n"},{"location":"tutorials/define_datatype/#lookup-the-confirmed-data-type","title":"Lookup the confirmed data type","text":"GET /api/v1/namespaces/default/datatypes?name=widget&version=0.0.2
[\n {\n \"id\": \"421c94b1-66ce-4ba0-9794-7e03c63df29d\", // an ID allocated to the datatype\n \"message\": \"727f7d3a-d07e-4e80-95af-59f8d2ac7531\", // the message that broadcast this data type\n \"validator\": \"json\", // the type of validator that this datatype can be used for (this one is JSON Schema)\n \"namespace\": \"default\", // the namespace of the datatype\n \"name\": \"widget\", // the name of the datatype\n \"version\": \"0.0.2\", // the version of the data type\n \"hash\": \"a4dceb79a21937ca5ea9fa22419011ca937b4b8bc563d690cea3114af9abce2c\", // hash of the schema itself\n \"created\": \"2021-07-01T21:06:26.983986Z\", // time it was confirmed\n \"value\": {\n // the JSON schema itself\n \"$id\": \"https://example.com/widget.schema.json\",\n \"$schema\": \"https://json-schema.org/draft/2020-12/schema\",\n \"title\": \"Widget\",\n \"type\": \"object\",\n \"properties\": {\n \"id\": {\n \"type\": \"string\",\n \"description\": \"The unique identifier for the widget.\"\n },\n \"name\": {\n \"type\": \"string\",\n \"description\": \"The person's last name.\"\n }\n }\n }\n }\n]\n"},{"location":"tutorials/define_datatype/#example-private-send-referring-to-the-datatype","title":"Example private send referring to the datatype","text":"Once confirmed, a piece of data can be assigned that datatype and all FireFly nodes will verify it against the schema. On a sending node, the data will be rejected at upload/send time if it does not conform. On other nodes, bad data results in a message_rejected event (rather than message_confirmed) for any message that arrives referring to that data.
POST /api/v1/namespaces/default/send/message
{\n \"header\": {\n \"tag\": \"new_widget_created\",\n \"topic\": [\"widget_id_12345\"]\n },\n \"group\": {\n \"members\": [\n {\n \"identity\": \"org_1\"\n }\n ]\n },\n \"data\": [\n {\n \"datatype\": {\n \"name\": \"widget\",\n \"version\": \"0.0.2\"\n },\n \"value\": {\n \"id\": \"widget_id_12345\",\n \"name\": \"superwidget\"\n }\n }\n ]\n}\n"},{"location":"tutorials/define_datatype/#defining-datatypes-using-the-sandbox","title":"Defining Datatypes using the Sandbox","text":"You can also define a datatype through the FireFly Sandbox.
To get started, open up the Web UI and Sandbox UI for at least one of your members. The URLs for these were printed in your terminal when you started your FireFly stack.
In the sandbox, enter the datatype's name, version, and JSON Schema as seen in the screenshot below.
{\n \"name\": \"widget\",\n \"version\": \"0.0.2\",\n \"value\": {\n \"$id\": \"https://example.com/widget.schema.json\",\n \"$schema\": \"https://json-schema.org/draft/2020-12/schema\",\n \"title\": \"Widget\",\n \"type\": \"object\",\n \"properties\": {\n \"id\": {\n \"type\": \"string\",\n \"description\": \"The unique identifier for the widget.\"\n },\n \"name\": {\n \"type\": \"string\",\n \"description\": \"The person's last name.\"\n }\n }\n }\n}\n Notice how the data field in the center panel updates in real time.
Click the blue Run button. This should return a 202 response immediately in the Server Response section and will populate the right hand panel with transaction information after a few seconds.
Go back to the FireFly UI (the URL for this would have been shown in the terminal when you started the stack) and you'll see that you've successfully defined your datatype.
"},{"location":"tutorials/events/","title":"Listen for events","text":""},{"location":"tutorials/events/#quick-reference","title":"Quick reference","text":"Probably the most important aspect of FireFly is its event-driven programming model.
Parties interact by sending messages and transactions to each other, on and off chain. Once aggregated and confirmed, those events drive processing in the other party.
This allows orchestration of complex multi-party system applications and business processes.
FireFly provides each party with their own private history, which includes all outbound and inbound exchanges performed through the node into the multi-party system. That includes blockchain-backed transactions, as well as completely off-chain message exchanges.
The event transports are pluggable. The core transports are WebSockets and Webhooks. We focus on WebSockets in this getting started guide.
Check out the Request/Reply section for more information on Webhooks
"},{"location":"tutorials/events/#additional-info","title":"Additional info","text":"The simplest way to get started consuming events is with an ephemeral WebSocket listener.
Example connection URL:
ws://localhost:5000/ws?namespace=default&ephemeral&autoack&filter.events=message_confirmed
namespace=default - event listeners are scoped to a namespace
ephemeral - listen for events that occur while this connection is active, but do not remember the app instance (great for UIs)
autoack - automatically acknowledge each event, so the next event is sent (great for UIs)
filter.events=message_confirmed - only listen for events resulting from a message confirmation
There are a number of browser extensions that let you experiment with WebSockets:
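The ephemeral connection URL is just a base WebSocket address plus query parameters; flags such as ephemeral and autoack take no value. A small sketch assembling it (the host and port are assumptions for a local FireFly stack):

```python
# Assemble the ephemeral WebSocket URL from its query parameters.
base = "ws://localhost:5000"  # assumed local FireFly Core address
params = [
    "namespace=default",                 # listeners are scoped to a namespace
    "ephemeral",                         # don't persist a subscription (valueless flag)
    "autoack",                           # auto-acknowledge each event (valueless flag)
    "filter.events=message_confirmed",   # server-side event-type filter
]
url = f"{base}/ws?" + "&".join(params)
assert url == "ws://localhost:5000/ws?namespace=default&ephemeral&autoack&filter.events=message_confirmed"
```

Any standard WebSocket client library can then open this URL and start receiving events.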
"},{"location":"tutorials/events/#example-event-payload","title":"Example event payload","text":"The events (by default) do not contain the payload data, just the event and referred message. This means the WebSocket payloads are a predictably small size, and the application can use the information in the message to post-filter the event to decide if it needs to download the full data.
There are server-side filters provided on events as well
{\n \"id\": \"8f0da4d7-8af7-48da-912d-187979bf60ed\",\n \"sequence\": 61,\n \"type\": \"message_confirmed\",\n \"namespace\": \"default\",\n \"reference\": \"9710a350-0ba1-43c6-90fc-352131ce818a\",\n \"created\": \"2021-07-02T04:37:47.6556589Z\",\n \"subscription\": {\n \"id\": \"2426c5b1-ffa9-4f7d-affb-e4e541945808\",\n \"namespace\": \"default\",\n \"name\": \"2426c5b1-ffa9-4f7d-affb-e4e541945808\"\n },\n \"message\": {\n \"header\": {\n \"id\": \"9710a350-0ba1-43c6-90fc-352131ce818a\",\n \"type\": \"broadcast\",\n \"txtype\": \"batch_pin\",\n \"author\": \"0x1d14b65d2dd5c13f6cb6d3dc4aa13c795a8f3b28\",\n \"created\": \"2021-07-02T04:37:40.1257944Z\",\n \"namespace\": \"default\",\n \"topic\": [\"default\"],\n \"datahash\": \"cd6a09a15ccd3e6ed1d67d69fa4773b563f27f17f3eaad611a2792ba945ca34f\"\n },\n \"hash\": \"1b6808d2b95b418e54e7bd34593bfa36a002b841ac42f89d00586dac61e8df43\",\n \"batchID\": \"16ffc02c-8cb0-4e2f-8b58-a707ad1d1eae\",\n \"state\": \"confirmed\",\n \"confirmed\": \"2021-07-02T04:37:47.6548399Z\",\n \"data\": [\n {\n \"id\": \"b3a814cc-17d1-45d5-975e-90279ed2c3fc\",\n \"hash\": \"9ddefe4435b21d901439e546d54a14a175a3493b9fd8fbf38d9ea6d3cbf70826\"\n }\n ]\n }\n}\n"},{"location":"tutorials/events/#download-the-message-and-data","title":"Download the message and data","text":"A simple REST API is provided to allow you to download the data associated with the message:
GET /api/v1/namespaces/default/messages/{id}?data=true
As you already have the message object in the event delivery, you can query just the array of data objects as follows:
GET /api/v1/namespaces/default/messages/{id}/data
To reliably process messages within your application, you should first set up a subscription.
A subscription requests that:
This should be combined with manual acknowledgment of the events, where the application sends a payload such as the following in response to each event it receives (where the id comes from the event it received):
{ \"type\": \"ack\", \"id\": \"617db63-2cf5-4fa3-8320-46150cbb5372\" }\n You must send an acknowledgement for every message, or you will stop receiving messages.
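A sketch of how an application might construct the ack payload from the event it just received; actually sending it over the WebSocket (with whatever client library you use) is omitted here:

```python
import json

# Build the manual acknowledgement payload for a received event.
# The "id" in the ack must be the id of the event being acknowledged.
def make_ack(event: dict) -> str:
    return json.dumps({"type": "ack", "id": event["id"]})

# Sample event shaped like the delivery payloads shown above
received = {"type": "message_confirmed", "id": "8f0da4d7-8af7-48da-912d-187979bf60ed"}
ack = make_ack(received)
assert json.loads(ack) == {"type": "ack", "id": "8f0da4d7-8af7-48da-912d-187979bf60ed"}
```

Sending one ack per delivered event keeps the stream flowing; processing the event fully before acking gives at-least-once delivery semantics.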
"},{"location":"tutorials/events/#set-up-the-websocket-subscription","title":"Set up the WebSocket subscription","text":"Each subscription is scoped to a namespace, and must have a name. You can then choose to perform server-side filtering on the events using regular expressions matched against the information in the event.
POST /namespaces/default/subscriptions
{\n \"transport\": \"websockets\",\n \"name\": \"app1\",\n \"filter\": {\n \"blockchainevent\": {\n \"listener\": \".*\",\n \"name\": \".*\"\n },\n \"events\": \".*\",\n \"message\": {\n \"author\": \".*\",\n \"group\": \".*\",\n \"tag\": \".*\",\n \"topics\": \".*\"\n },\n \"transaction\": {\n \"type\": \".*\"\n }\n },\n \"options\": {\n \"firstEvent\": \"newest\",\n \"readAhead\": 50\n }\n}\n"},{"location":"tutorials/events/#connect-to-consume-messages","title":"Connect to consume messages","text":"Example connection URL:
ws://localhost:5000/ws?namespace=default&name=app1
namespace=default - event listeners are scoped to a namespace
name=app1 - the subscription name
If you are interested in learning more about events for custom smart contracts, please see the Working with custom smart contracts section.
"},{"location":"tutorials/private_send/","title":"Privately send data","text":""},{"location":"tutorials/private_send/#quick-reference","title":"Quick reference","text":"message to a restricted set of parties; message has one or more attached pieces of business data; datatype; group specifies who has visibility to the data; message_confirmed event occurs; batch can pin hundreds of private message sends; message_confirmed event immediately; message_confirmed events as soon as the data arrives. POST /api/v1/namespaces/default/messages/private
{\n \"data\": [\n {\n \"value\": \"a string\"\n }\n ],\n \"group\": {\n \"members\": [\n {\n \"identity\": \"org_1\"\n }\n ]\n }\n}\n"},{"location":"tutorials/private_send/#example-message-response","title":"Example message response","text":"Status: 202 Accepted - the message is on its way, but has not yet been confirmed.
{\n \"header\": {\n \"id\": \"c387e9d2-bdac-44cc-9dd5-5e7f0b6b0e58\", // uniquely identifies this private message\n \"type\": \"private\", // set automatically\n \"txtype\": \"batch_pin\", // message will be batched, and sequenced via the blockchain\n \"author\": \"0x0a65365587a65ce44938eab5a765fe8bc6532bdf\", // set automatically in this example to the node org\n \"created\": \"2021-07-02T02:37:13.4642085Z\", // set automatically\n \"namespace\": \"default\", // the 'default' namespace was set in the URL\n // The group hash is calculated from the resolved list of group participants.\n // The first time a group is used, the participant list is sent privately along with the\n // batch of messages in a `groupinit` message.\n \"group\": \"2aa5297b5eed0c3a612a667c727ca38b54fb3b5cc245ebac4c2c7abe490bdf6c\",\n \"topics\": [\n \"default\" // the default topic that the message is published on, if no topic is set\n ],\n // datahash is calculated from the data array below\n \"datahash\": \"24b2d583b87eda952fa00e02c6de4f78110df63218eddf568f0240be3d02c866\"\n },\n \"hash\": \"423ad7d99fd30ff679270ad2b6b35cdd85d48db30bafb71464ca1527ce114a60\", // hash of the header\n \"state\": \"ready\", // this message is stored locally but not yet confirmed\n \"data\": [\n // one item of data was stored\n {\n \"id\": \"8d8635e2-7c90-4963-99cc-794c98a68b1d\", // can be used to query the data in the future\n \"hash\": \"c95d6352f524a770a787c16509237baf7eb59967699fb9a6d825270e7ec0eacf\" // sha256 hash of `\"a string\"`\n }\n ]\n}\n"},{"location":"tutorials/private_send/#example-2-unpinned-private-send-of-in-line-string-data","title":"Example 2: Unpinned private send of in-line string data","text":"Set header.txtype: \"none\" to disable pinning of the private message send to the blockchain. The message is sent immediately (no batching) over the private data exchange.
POST /api/v1/namespaces/default/messages/private
{\n \"header\": {\n \"txtype\": \"none\"\n },\n \"data\": [\n {\n \"value\": \"a string\"\n }\n ],\n \"group\": {\n \"members\": [\n {\n \"identity\": \"org_1\"\n }\n ]\n }\n}\n"},{"location":"tutorials/private_send/#example-3-inline-object-data-to-a-topic-no-datatype-verification","title":"Example 3: Inline object data to a topic (no datatype verification)","text":"It is very good practice to set a tag and topic in each of your messages:
tag should tell the apps receiving the private send (including the local app) what to do when they receive the message. It's the reason for the send - an application-specific type for the message.
topic should be something like a well-known identifier that relates to the information you are publishing. It is used as an ordering context, so all sends on a given topic are assured to be processed in order.
POST /api/v1/namespaces/default/messages/private
{\n \"header\": {\n \"tag\": \"new_widget_created\",\n \"topics\": [\"widget_id_12345\"]\n },\n \"group\": {\n \"members\": [\n {\n \"identity\": \"org_1\"\n }\n ]\n },\n \"data\": [\n {\n \"value\": {\n \"id\": \"widget_id_12345\",\n \"name\": \"superwidget\"\n }\n }\n ]\n}\n"},{"location":"tutorials/private_send/#notes-on-why-setting-a-topic-is-important","title":"Notes on why setting a topic is important","text":"The FireFly aggregator uses the topic (obfuscated on chain) to determine if a message is the next message in an in-flight sequence for any groups the node is involved in. If it is, then that message must receive all off-chain private data and be confirmed before any subsequent messages can be confirmed on the same sequence.
So if you use the same topic in every message, then a single failed send on one topic blocks delivery of all messages between those parties, until the missing data arrives.
Instead it is best practice to set the topic on your messages to a value that identifies an ordered stream of business processing. Some examples:
The topic field is an array, because there are cases (such as merging two identifiers) where you need a message to be deterministically ordered across multiple sequences. However, this is an advanced use case and you are likely to set a single topic on the vast majority of your messages.
Here we make two API calls.
Create the data object explicitly, using a multi-part form upload
You can also just post JSON to this endpoint
Privately send a message referring to that data
The Blob is sent privately to each party
Example curl command (Linux/Mac) to grab an image from the internet, and pipe it into a multi-part form post to FireFly.
Note we use autometa to cause FireFly to automatically add the filename, and size, to the JSON part of the data object for us.
curl -sLo - https://github.com/hyperledger/firefly/raw/main/docs/firefly_logo.png \\\n | curl --form autometa=true --form file=@- \\\n http://localhost:5000/api/v1/namespaces/default/data\n"},{"location":"tutorials/private_send/#example-data-response-from-blob-upload","title":"Example data response from Blob upload","text":"Status: 200 OK - your data is uploaded to your local FireFly node
At this point the data has not been shared with anyone else in the network.
{\n // A uniquely generated ID, we can refer to when sending this data to other parties\n \"id\": \"97eb750f-0d0b-4c1d-9e37-1e92d1a22bb8\",\n \"validator\": \"json\", // the \"value\" part is JSON\n \"namespace\": \"default\", // from the URL\n // The hash is a combination of the hash of the \"value\" metadata, and the\n // hash of the blob\n \"hash\": \"997af6a9a19f06cc8a46872617b8bf974b106f744b2e407e94cc6959aa8cf0b8\",\n \"created\": \"2021-07-01T20:20:35.5462306Z\",\n \"value\": {\n \"filename\": \"-\", // dash is how curl represents the filename for stdin\n \"size\": 31185 // the size of the blob data\n },\n \"blob\": {\n // A hash reference to the blob\n \"hash\": \"86e6b39b04b605dd1b03f70932976775962509d29ae1ad2628e684faabe48136\"\n }\n}\n"},{"location":"tutorials/private_send/#send-the-uploaded-data-privately","title":"Send the uploaded data privately","text":"Just include a reference to the id returned from the upload.
POST /api/v1/namespaces/default/messages/private
{\n \"data\": [\n {\n \"id\": \"97eb750f-0d0b-4c1d-9e37-1e92d1a22bb8\"\n }\n ],\n \"group\": {\n \"members\": [\n {\n \"identity\": \"org_1\"\n }\n ]\n }\n}\n"},{"location":"tutorials/private_send/#sending-private-messages-using-the-sandbox","title":"Sending Private Messages using the Sandbox","text":"All of the functionality discussed above can be done through the FireFly Sandbox.
To get started, open up the Web UI and Sandbox UI for at least one of your members. The URLs for these were printed in your terminal when you started your FireFly stack.
Make sure to expand the \"Send a Private Message\" section and choose a message recipient. Enter your message into the message field as seen in the screenshot below.
Notice how the data field in the center panel updates in real time as you update the message you wish to send.
Click the blue Run button. This should return a 202 response immediately in the Server Response section and will populate the right hand panel with transaction information after a few seconds.
Go back to the FireFly UI (the URL for this would have been shown in the terminal when you started the stack) and you'll see your successful blockchain transaction in the \"Recent Network Changes\" widget. With private messages, the transaction is visible on chain to everyone, but the message data itself is only shared with the chosen recipients.
"},{"location":"tutorials/query_messages/","title":"Explore messages","text":""},{"location":"tutorials/query_messages/#quick-reference","title":"Quick reference","text":"The FireFly Explorer is a great way to view the messages sent and received by your node.
Just open /ui on your FireFly node to access it.
This builds on the APIs to query and filter messages, described below
"},{"location":"tutorials/query_messages/#additional-info","title":"Additional info","text":"These are the messages ready to be processed in your application. All data associated with the message (including Blob attachments) is available, and if they are sequenced by the blockchain, then those blockchain transactions are complete.
The order in which you process messages should be determined by absolute order of message_confirmed events - queryable via the events collection, or through event listeners (discussed next in the getting started guide).
That is because messages are ordered by timestamp, which is potentially subject to adjustments of the clock, whereas events are ordered by their insertion order into the database, so changes in the clock do not affect their order.
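Because events carry a database-assigned sequence number, sorting on it gives a stable processing order even if message timestamps drift. A small sketch with sample data (the field names follow the event payloads shown earlier):

```python
# Process events in "sequence" order, not by message timestamp.
# Sample events only - in practice these arrive from the events API or a listener.
events = [
    {"sequence": 63, "type": "message_confirmed", "reference": "msg-c"},
    {"sequence": 61, "type": "message_confirmed", "reference": "msg-a"},
    {"sequence": 62, "type": "message_confirmed", "reference": "msg-b"},
]
ordered = sorted(events, key=lambda e: e["sequence"])
assert [e["reference"] for e in ordered] == ["msg-a", "msg-b", "msg-c"]
```

Subscriptions deliver events in this order for you; explicit sorting like this only matters when you query the events collection directly.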
GET /api/v1/namespaces/{ns}/messages?pending=false&limit=100
[\n {\n \"header\": {\n \"id\": \"423302bb-abfc-4d64-892d-38b2fdfe1549\",\n \"type\": \"private\", // this was a private send\n \"txtype\": \"batch_pin\", // pinned in a batch to the blockchain\n \"author\": \"0x1d14b65d2dd5c13f6cb6d3dc4aa13c795a8f3b28\",\n \"created\": \"2021-07-02T03:09:40.2606238Z\",\n \"namespace\": \"default\",\n \"group\": \"2aa5297b5eed0c3a612a667c727ca38b54fb3b5cc245ebac4c2c7abe490bdf6c\", // sent to this group\n \"topic\": [\"widget_id_12345\"],\n \"tag\": \"new_widget_created\",\n \"datahash\": \"551dd261e80ce76b1908c031cff8a707bd76376d6eddfdc1040c2ed6481ec8dd\"\n },\n \"hash\": \"bf2ca94db8c31bae3cae974bb626fa822c6eee5f572d274d72281e72537b30b3\",\n \"batch\": \"f7ac773d-885a-4d73-ac6b-c09f5346a051\", // the batch ID that pinned this message to the chain\n \"state\": \"confirmed\", // message is now confirmed\n \"confirmed\": \"2021-07-02T03:09:49.9207211Z\", // timestamp when this node confirmed the message\n \"data\": [\n {\n \"id\": \"914eed77-8789-451c-b55f-ba9570a71eba\",\n \"hash\": \"9541cabc750c692e553a421a6c5c07ebcae820774d2d8d0b88fac2a231c10bf2\"\n }\n ],\n \"pins\": [\n // A \"pin\" is an identifier that is used by FireFly for sequencing messages.\n //\n // For private messages, it is an obfuscated representation of the sequence of this message,\n // on a topic, within this group, from this sender. There will be one pin per topic. 
You will find these\n // pins in the blockchain transaction, as well as the off-chain data.\n // Each one is unique, and without the group hash, very difficult to correlate - meaning\n // the data on-chain provides a high level of privacy.\n //\n // Note for broadcast (which does not require obfuscation), it is simply a hash of the topic.\n // So you will see the same pin for all messages on the same topic.\n \"ee56de6241522ab0ad8266faebf2c0f1dc11be7bd0c41d847998135b45685b77\"\n ]\n }\n]\n"},{"location":"tutorials/query_messages/#example-2-query-all-messages","title":"Example 2: Query all messages","text":"The natural sort order the API will return for messages is:
created timestamp order
confirmed timestamp order
GET /api/v1/namespaces/{ns}/messages
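The pin comments above note that for broadcast messages the pin is simply a hash of the topic, so every broadcast on the same topic carries the same pin. As a loose illustration only (FireFly's actual pin computation may differ in encoding details), hashing the topic string is deterministic:

```python
import hashlib

def broadcast_pin(topic: str) -> str:
    # Illustrative only: SHA-256 of the topic string. FireFly's real pin
    # derivation for broadcasts may include additional encoding.
    return hashlib.sha256(topic.encode("utf-8")).hexdigest()

assert broadcast_pin("default") == broadcast_pin("default")        # deterministic per topic
assert broadcast_pin("default") != broadcast_pin("widget_id_12345")  # distinct topics differ
assert len(broadcast_pin("default")) == 64                         # hex-encoded SHA-256
```

Private message pins, by contrast, mix in the group hash and sender context, which is why they are unique per message and hard to correlate on chain.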
At some point you may need to rotate certificates on your Data Exchange nodes. FireFly provides an API to update a node identity, but there are a few prerequisite steps to load a new certificate on the Data Exchange node itself. This guide will walk you through that process. For more information on different types of identities in FireFly, please see the Reference page on Identities.
NOTE: This guide assumes that you are working in a local development environment that was set up with the Getting Started Guide. For a production deployment, the exact process to accomplish each step may be different. For example, you may generate your certs with a CA, or in some other manner, but the high-level steps remain the same.
The high level steps to the process (described in detail below) are:
peer-certs directory
PATCH the node identity using the FireFly API
To generate a new cert, we're going to use a self-signed certificate generated by openssl. This is how the FireFly CLI generated the original cert that was used when it created your stack.
For the first member of a FireFly stack you run:
openssl req -new -x509 -nodes -days 365 -subj /CN=dataexchange_0/O=member_0 -keyout key.pem -out cert.pem\n For the second member:
openssl req -new -x509 -nodes -days 365 -subj /CN=dataexchange_1/O=member_1 -keyout key.pem -out cert.pem\n NOTE: If you perform these two commands in the same directory, the second one will overwrite the output of the first. It is advisable to run them in separate directories, or copy the cert and key to the Data Exchange file system (the next step below) before generating the next cert / key pair.
"},{"location":"tutorials/rotate_dx_certs/#install-the-new-certs-on-each-data-exchange-file-system","title":"Install the new certs on each Data Exchange File System","text":"For a dev environment created with the FireFly CLI, the certificate and key will be located in the /data directory on the Data Exchange node's file system. You can use the docker cp command to copy the file to the correct location, then set the file ownership correctly.
docker cp cert.pem dev_dataexchange_0:/data/cert.pem\ndocker exec dev_dataexchange_0 chown root:root /data/cert.pem\n NOTE: If your environment is not called dev you may need to change the beginning of the container name in the Docker commands listed in this guide.
peer-certs directory","text":"To clear out the old certs from the first Data Exchange node run:
docker exec dev_dataexchange_0 sh -c \"rm /data/peer-certs/*.pem\"\n To clear out the old certs from the second Data Exchange node run:
docker exec dev_dataexchange_1 sh -c \"rm /data/peer-certs/*.pem\"\n"},{"location":"tutorials/rotate_dx_certs/#restart-each-data-exchange-process","title":"Restart each Data Exchange process","text":"To restart your Data Exchange processes, run:
docker restart dev_dataexchange_0\n docker restart dev_dataexchange_1\n"},{"location":"tutorials/rotate_dx_certs/#patch-the-node-identity-using-the-firefly-api","title":"PATCH the node identity using the FireFly API","text":"The final step is to broadcast the new cert for each node, from the FireFly node that will be using that cert. You will need to lookup the UUID for the node identity in order to update it.
"},{"location":"tutorials/rotate_dx_certs/#request","title":"Request","text":"GET http://localhost:5000/api/v1/namespaces/default/identities
In the JSON response body, look for the node identity that belongs on this FireFly instance. Here is the node identity from an example stack:
...\n {\n \"id\": \"20da74a2-d4e6-4eaf-8506-e7cd205d8254\",\n \"did\": \"did:firefly:node/node_2b9630\",\n \"type\": \"node\",\n \"parent\": \"41e93d92-d0da-4e5a-9cee-adf33f017a60\",\n \"namespace\": \"default\",\n \"name\": \"node_2b9630\",\n \"profile\": {\n \"cert\": \"-----BEGIN CERTIFICATE-----\\nMIIC1DCCAbwCCQDa9x3wC7wepDANBgkqhkiG9w0BAQsFADAsMRcwFQYDVQQDDA5k\\nYXRhZXhjaGFuZ2VfMDERMA8GA1UECgwIbWVtYmVyXzAwHhcNMjMwMjA2MTQwMTEy\\nWhcNMjQwMjA2MTQwMTEyWjAsMRcwFQYDVQQDDA5kYXRhZXhjaGFuZ2VfMDERMA8G\\nA1UECgwIbWVtYmVyXzAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDJ\\nSgtJw99V7EynvqxWdJkeiUlOg3y+JtJlhxGC//JLp+4sYCtOMriULNf5ouImxniR\\nO2vEd+LNdMuREN4oZdUHtJD4MM7lOFw/0ICNEPJ+oEoUTzOC0OK68sA+OCybeS2L\\nmLBu4yvWDkpufR8bxBJfBGarTAFl36ao1Eoogn4m9gmVrX+V5SOKUhyhlHZFkZNb\\ne0flwQmDMKg6qAbHf3j8cnrrZp26n68IGjwqySPFIRLFSz28zzMYtyzo4b9cF9NW\\nGxusMHsExX5gzlTjNacGx8Tlzwjfolt23D+WHhZX/gekOsFiV78mVjgJanE2ls6D\\n5ZlXi5iQSwm8dlmo9RxFAgMBAAEwDQYJKoZIhvcNAQELBQADggEBAAwr4aAvQnXG\\nkO3xNO+7NGzbb/Nyck5udiQ3RmlZBEJSUsPCsWd4SBhH7LvgbT9ECuAEjgH+2Ip7\\nusd8CROr3sTb9t+7Krk+ljgZirkjq4j/mIRlqHcBJeBtylOz2p0oPsitlI8Yea2D\\nQ4/Xru6txUKNK+Yut3G9qvg/vm9TAwkNHSthzb26bI7s6lx9ZSuFbbG6mR+RQ+8A\\nU4AX1DVo5QyTwSi1lp0+pKFEgtutmWGYn8oT/ya+OLzj+l7Ul4HE/mEAnvECtA7r\\nOC8AEjC5T4gUsLt2IXW9a7lCgovjHjHIySQyqsdYBjkKSn5iw2LRovUWxT1GBvwH\\nFkTvCpHhgko=\\n-----END CERTIFICATE-----\\n\",\n \"endpoint\": \"https://dataexchange_0:3001\",\n \"id\": \"member_0/node_2b9630\"\n },\n \"messages\": {\n \"claim\": \"95da690b-bb05-4873-9478-942f607f363a\",\n \"verification\": null,\n \"update\": null\n },\n \"created\": \"2023-02-06T14:02:50.874319382Z\",\n \"updated\": \"2023-02-06T14:02:50.874319382Z\"\n },\n...\n Copy the UUID from the id field, and add that to the PATCH request. In this case it is 20da74a2-d4e6-4eaf-8506-e7cd205d8254.
Now we will send the new certificate to FireFly. Put the contents of your cert.pem file in the cert field.
NOTE: Usually the cert.pem file will contain line breaks which will not be handled correctly by JSON parsers. Be sure to replace those line breaks with \\n so that the cert field is all on one line as shown below.
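One convenient way to get the escaping right is to let a JSON serializer do it, rather than editing the PEM by hand. A sketch (the PEM here is a truncated sample, and the endpoint/id fields of the real PATCH body are omitted):

```python
import json

# Turn a multi-line PEM certificate into the single-line, \n-escaped string
# required in the "cert" field of the PATCH body. json.dumps escapes the
# real line breaks as \n automatically.
pem = "-----BEGIN CERTIFICATE-----\nMIIC1D...\n-----END CERTIFICATE-----\n"  # truncated sample
body = json.dumps({"profile": {"cert": pem}})

assert "\n" not in body                              # no literal line breaks in the JSON text
assert "\\n" in body                                 # they are escaped instead
assert json.loads(body)["profile"]["cert"] == pem    # round-trips to the original PEM
```

In practice you would read the PEM with `open("cert.pem").read()` and send `body` as the PATCH request payload.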
PATCH http://localhost:5000/api/v1/namespaces/default/identities/20da74a2-d4e6-4eaf-8506-e7cd205d8254
{\n \"profile\": {\n \"cert\": \"-----BEGIN CERTIFICATE-----\\nMIIC1DCCAbwCCQDeKjPt3siRHzANBgkqhkiG9w0BAQsFADAsMRcwFQYDVQQDDA5k\\nYXRhZXhjaGFuZ2VfMDERMA8GA1UECgwIbWVtYmVyXzAwHhcNMjMwMjA2MTYxNTU3\\nWhcNMjQwMjA2MTYxNTU3WjAsMRcwFQYDVQQDDA5kYXRhZXhjaGFuZ2VfMDERMA8G\\nA1UECgwIbWVtYmVyXzAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCy\\nEJaqDskxhkPHmCqj5Mxq+9QX1ec19fulh9Zvp8dLA6bfeg4fdQ9Ha7APG6w/0K8S\\nEaXOflSpXb0oKMe42amIqwvQaqTOA97HIe5R2HZxA1RWqXf+AueowWgI4crxr2M0\\nZCiXHyiZKpB8nzO+bdO9AKeYnzbhCsO0gq4LPOgpPjYkHPKhabeMVZilZypDVOGk\\nLU+ReQoVEZ+P+t0B/9v+5IQ2yyH41n5dh6lKv4mIaC1OBtLc+Pd6DtbRb7pijkgo\\n+LyqSdl24RHhSgZcTtMQfoRIVzvMkhF5SiJczOC4R8hmt62jtWadO4D5ZtJ7N37/\\noAG/7KJO4HbByVf4xOcDAgMBAAEwDQYJKoZIhvcNAQELBQADggEBAKWbQftV05Fc\\niwVtZpyvP2l4BvKXvMOyg4GKcnBSZol7UwCNrjwYSjqgqyuedTSZXHNhGFxQbfAC\\n94H25bDhWOfd7JH2D7E6RRe3eD9ouDnrt+de7JulsNsFK23IM4Nz5mRhRMVy/5p5\\n9yrsdW+5MXKWgz9569TIjiciCf0JqB7iVPwRrQyz5gqOiPf81PlyaMDeaH9wXtra\\n/1ZRipXiGiNroSPFrQjIVLKWdmnhWKWjFXsiijdSV/5E+8dBb3t//kEZ8UWfBrc4\\nfYVuZ8SJtm2ZzBmit3HFatDlFTE8PanRf/UDALUp4p6YKJ8NE2T8g/uDE0ee1pnF\\nIDsrC1GX7rs=\\n-----END CERTIFICATE-----\\n\",\n \"endpoint\": \"https://dataexchange_0:3001\",\n \"id\": \"member_0\"\n }\n}\n"},{"location":"tutorials/rotate_dx_certs/#response_1","title":"Response","text":"{\n \"id\": \"20da74a2-d4e6-4eaf-8506-e7cd205d8254\",\n \"did\": \"did:firefly:node/node_2b9630\",\n \"type\": \"node\",\n \"parent\": \"41e93d92-d0da-4e5a-9cee-adf33f017a60\",\n \"namespace\": \"default\",\n \"name\": \"node_2b9630\",\n \"profile\": {\n \"cert\": \"-----BEGIN 
CERTIFICATE-----\\nMIIC1DCCAbwCCQDeKjPt3siRHzANBgkqhkiG9w0BAQsFADAsMRcwFQYDVQQDDA5k\\nYXRhZXhjaGFuZ2VfMDERMA8GA1UECgwIbWVtYmVyXzAwHhcNMjMwMjA2MTYxNTU3\\nWhcNMjQwMjA2MTYxNTU3WjAsMRcwFQYDVQQDDA5kYXRhZXhjaGFuZ2VfMDERMA8G\\nA1UECgwIbWVtYmVyXzAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCy\\nEJaqDskxhkPHmCqj5Mxq+9QX1ec19fulh9Zvp8dLA6bfeg4fdQ9Ha7APG6w/0K8S\\nEaXOflSpXb0oKMe42amIqwvQaqTOA97HIe5R2HZxA1RWqXf+AueowWgI4crxr2M0\\nZCiXHyiZKpB8nzO+bdO9AKeYnzbhCsO0gq4LPOgpPjYkHPKhabeMVZilZypDVOGk\\nLU+ReQoVEZ+P+t0B/9v+5IQ2yyH41n5dh6lKv4mIaC1OBtLc+Pd6DtbRb7pijkgo\\n+LyqSdl24RHhSgZcTtMQfoRIVzvMkhF5SiJczOC4R8hmt62jtWadO4D5ZtJ7N37/\\noAG/7KJO4HbByVf4xOcDAgMBAAEwDQYJKoZIhvcNAQELBQADggEBAKWbQftV05Fc\\niwVtZpyvP2l4BvKXvMOyg4GKcnBSZol7UwCNrjwYSjqgqyuedTSZXHNhGFxQbfAC\\n94H25bDhWOfd7JH2D7E6RRe3eD9ouDnrt+de7JulsNsFK23IM4Nz5mRhRMVy/5p5\\n9yrsdW+5MXKWgz9569TIjiciCf0JqB7iVPwRrQyz5gqOiPf81PlyaMDeaH9wXtra\\n/1ZRipXiGiNroSPFrQjIVLKWdmnhWKWjFXsiijdSV/5E+8dBb3t//kEZ8UWfBrc4\\nfYVuZ8SJtm2ZzBmit3HFatDlFTE8PanRf/UDALUp4p6YKJ8NE2T8g/uDE0ee1pnF\\nIDsrC1GX7rs=\\n-----END CERTIFICATE-----\\n\",\n \"endpoint\": \"https://dataexchange_0:3001\",\n \"id\": \"member_0\"\n },\n \"messages\": {\n \"claim\": \"95da690b-bb05-4873-9478-942f607f363a\",\n \"verification\": null,\n \"update\": \"5782cd7c-7643-4d7f-811b-02765a7aaec5\"\n },\n \"created\": \"2023-02-06T14:02:50.874319382Z\",\n \"updated\": \"2023-02-06T14:02:50.874319382Z\"\n}\n Repeat these requests for the second member/node running on port 5001. After that you should be back up and running with your new certs, and you should be able to send private messages again.
Starting with FireFly v1.1, it's easy to connect to public Ethereum chains. This guide will walk you through the steps to create a local FireFly development environment and connect it to the Arbitrum Nitro Goerli Rollup Testnet.
"},{"location":"tutorials/chains/arbitrum/#previous-steps-install-the-firefly-cli","title":"Previous steps: Install the FireFly CLI","text":"If you haven't set up the FireFly CLI already, please go back to the Getting Started guide and read the section on how to Install the FireFly CLI.
\u2190 \u2460 Install the FireFly CLI
"},{"location":"tutorials/chains/arbitrum/#create-an-evmconnectyml-config-file","title":"Create an evmconnect.yml config file","text":"In order to connect to the Arbitrum testnet, you will need to set a few configuration options for the evmconnect blockchain connector. Create a text file called evmconnect.yml with the following contents:
confirmations:\n required: 4 # choose the number of confirmations you require\npolicyengine.simple:\n fixedGasPrice: null\n gasOracle:\n mode: connector\n For more info about confirmations, see Public vs. Permissioned
For this tutorial, we will assume this file is saved at ~/Desktop/evmconnect.yml. If your path is different, you will need to adjust the path in the next command below.
To create a local FireFly development stack and connect it to the Arbitrum testnet, we will use command line flags to customize the following settings:
arbitrum with 1 member
multiparty mode. We are going to be using this FireFly node as a Web3 gateway, and we don't need to communicate with a consortium here
421613 (the correct ID for the Arbitrum Nitro Goerli Rollup testnet)
evmconnect config file
To do this, run the following command:
ff init ethereum arbitrum 1 \\\n --multiparty=false \\\n -n remote-rpc \\\n --remote-node-url <selected RPC endpoint> \\\n --chain-id 421613 \\\n --connector-config ~/Desktop/evmconnect.yml\n"},{"location":"tutorials/chains/arbitrum/#start-the-stack","title":"Start the stack","text":"Now you should be able to start your stack by running:
ff start arbitrum\n After some time it should print out the following:
Web UI for member '0': http://127.0.0.1:5000/ui\nSandbox UI for member '0': http://127.0.0.1:5109\n\n\nTo see logs for your stack run:\n\nff logs arbitrum\n"},{"location":"tutorials/chains/arbitrum/#get-some-aribitrum","title":"Get some Arbitrum","text":"At this point you should have a working FireFly stack, talking to a public chain. However, you won't be able to run any transactions just yet, because you don't have any way to pay for gas.
First, you will need to know what signing address your FireFly node is using. To check that, you can run:
ff accounts list arbitrum\n[\n {\n \"address\": \"0x225764d1be1f137be23ddfc426b819512b5d0f3e\",\n \"privateKey\": \"...\"\n }\n]\n Copy the address listed in the output from this command. Next, check out this article https://medium.com/offchainlabs/new-g%C3%B6rli-testnet-and-getting-rinkeby-ready-for-nitro-3ff590448053 and follow the instructions to send a tweet to the developers. Make sure to change the address to the one in the CLI.
"},{"location":"tutorials/chains/arbitrum/#confirm-the-transaction-on-bscscan","title":"Confirm the transaction on the block explorer","text":"You should be able to look up your account on https://goerli-rollup-explorer.arbitrum.io/ and see that you now have a balance of 0.001 ether. Simply paste in your account address to search for it.
"},{"location":"tutorials/chains/arbitrum/#use-the-public-testnet","title":"Use the public testnet","text":"Now that you have everything set up, you can follow one of the other FireFly guides such as Using Tokens or Custom Smart Contracts. For detailed instructions on deploying a custom smart contract to Arbitrum, please see the Arbitrum docs for instructions using various tools.
"},{"location":"tutorials/chains/avalanche/","title":"Avalanche","text":"Starting with FireFly v1.1, it's easy to connect to public Ethereum chains. This guide will walk you through the steps to create a local FireFly development environment and connect it to the Avalanche C-Chain Fuji testnet.
"},{"location":"tutorials/chains/avalanche/#previous-steps-install-the-firefly-cli","title":"Previous steps: Install the FireFly CLI","text":"If you haven't set up the FireFly CLI already, please go back to the Getting Started guide and read the section on how to Install the FireFly CLI.
\u2190 \u2460 Install the FireFly CLI
"},{"location":"tutorials/chains/avalanche/#create-an-evmconnectyml-config-file","title":"Create anevmconnect.yml config file","text":"In order to connect to the Avalanche testnet, you will need to set a few configuration options for the evmconnect blockchain connector. Create a text file called evmconnect.yml with the following contents:
confirmations:\n required: 4 # choose the number of confirmations you require\npolicyengine.simple:\n fixedGasPrice: null\n gasOracle:\n mode: connector\n For more info about confirmations, see Public vs. Permissioned
For this tutorial, we will assume this file is saved at ~/Desktop/evmconnect.yml. If your path is different, you will need to adjust the path in the next command below.
To create a local FireFly development stack and connect it to the Avalanche Fuji testnet, we will use command line flags to customize the following settings:
Create a new stack named avalanche with 1 member; disable multiparty mode (we are going to be using this FireFly node as a Web3 gateway, and we don't need to communicate with a consortium here); set the chain ID to 43113 (the correct ID for the Avalanche Fuji testnet); and merge the custom evmconnect config file created above. To do this, run the following command:
ff init ethereum avalanche 1 \\\n --multiparty=false \\\n -n remote-rpc \\\n --remote-node-url <selected RPC endpoint> \\\n --chain-id 43113 \\\n --connector-config ~/Desktop/evmconnect.yml\n"},{"location":"tutorials/chains/avalanche/#start-the-stack","title":"Start the stack","text":"Now you should be able to start your stack by running:
ff start avalanche\n After some time it should print out the following:
Web UI for member '0': http://127.0.0.1:5000/ui\nSandbox UI for member '0': http://127.0.0.1:5109\n\n\nTo see logs for your stack run:\n\nff logs avalanche\n"},{"location":"tutorials/chains/avalanche/#get-some-avax","title":"Get some AVAX","text":"At this point you should have a working FireFly stack, talking to a public chain. However, you won't be able to run any transactions just yet, because you don't have any way to pay for gas. A testnet faucet can give us some AVAX, the native token for Avalanche.
First, you will need to know what signing address your FireFly node is using. To check that, you can run:
ff accounts list avalanche\n[\n {\n \"address\": \"0x6688e14f719766cc2a5856ccef63b069703d86f7\",\n \"privateKey\": \"...\"\n }\n]\n Copy the address listed in the output from this command. Go to https://faucet.avax.network/ and paste the address in the form. Make sure that the network you select is Fuji (C-Chain). Click the Request 2 AVAX button.
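If you want to script this step, the address can be extracted from the CLI's JSON output. The snippet below is an illustrative sketch only: the JSON is inlined as a stand-in for what ff accounts list avalanche prints, and the sed expression is just one possible way to pull out the address field.

```shell
# Stand-in for the output of `ff accounts list avalanche` (illustrative data only)
JSON='[ { "address": "0x6688e14f719766cc2a5856ccef63b069703d86f7", "privateKey": "..." } ]'

# Extract the value of the first "address" field
ADDRESS=$(echo "$JSON" | sed -n 's/.*"address": *"\([^"]*\)".*/\1/p')
echo "$ADDRESS"
```

With a real stack you would pipe the command's output instead of the inlined JSON, or use a JSON-aware tool such as jq.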
"},{"location":"tutorials/chains/avalanche/#confirm-the-transaction-on-snowtrace","title":"Confirm the transaction on Snowtrace","text":"You should be able to go lookup your account on Snowtrace for the Fuji testnet and see that you now have a balance of 2 AVAX. Simply paste in your account address or transaction ID to search for it.
"},{"location":"tutorials/chains/avalanche/#use-the-public-testnet","title":"Use the public testnet","text":"Now that you have everything set up, you can follow one of the other FireFly guides such as Using Tokens or Custom Smart Contracts. For detailed instructions on deploying a custom smart contract to Avalanche, please see the Avalanche docs for instructions using various tools.
"},{"location":"tutorials/chains/binance_smart_chain/","title":"Binance Smart Chain","text":"Starting with FireFly v1.1, it's easy to connect to public Ethereum chains. This guide will walk you through the steps to create a local FireFly development environment and connect it to the public Binance Smart Chain testnet.
"},{"location":"tutorials/chains/binance_smart_chain/#previous-steps-install-the-firefly-cli","title":"Previous steps: Install the FireFly CLI","text":"If you haven't set up the FireFly CLI already, please go back to the Getting Started guide and read the section on how to Install the FireFly CLI.
\u2190 \u2460 Install the FireFly CLI
"},{"location":"tutorials/chains/binance_smart_chain/#create-an-evmconnectyml-config-file","title":"Create anevmconnect.yml config file","text":"In order to connect to the Binance Smart Chain testnet, you will need to set a few configuration options for the evmconnect blockchain connector. Create a text file called evmconnect.yml with the following contents:
confirmations:\n required: 4 # choose the number of confirmations you require\npolicyengine.simple:\n fixedGasPrice: null\n gasOracle:\n mode: connector\n For more info about confirmations, see Public vs. Permissioned
For this tutorial, we will assume this file is saved at ~/Desktop/evmconnect.yml. If your path is different, you will need to adjust the path in the next command below.
To create a local FireFly development stack and connect it to the Binance Smart Chain testnet, we will use command line flags to customize the following settings:
Create a new stack named bsc with 1 member; disable multiparty mode (we are going to be using this FireFly node as a Web3 gateway, and we don't need to communicate with a consortium here); set the chain ID to 97 (the correct ID for the Binance Smart Chain testnet); and merge the custom evmconnect config file created above. To do this, run the following command:
ff init ethereum bsc 1 \\\n --multiparty=false \\\n -n remote-rpc \\\n --remote-node-url <selected RPC endpoint> \\\n --chain-id 97 \\\n --connector-config ~/Desktop/evmconnect.yml\n"},{"location":"tutorials/chains/binance_smart_chain/#start-the-stack","title":"Start the stack","text":"Now you should be able to start your stack by running:
ff start bsc\n After some time it should print out the following:
Web UI for member '0': http://127.0.0.1:5000/ui\nSandbox UI for member '0': http://127.0.0.1:5109\n\n\nTo see logs for your stack run:\n\nff logs bsc\n"},{"location":"tutorials/chains/binance_smart_chain/#get-some-bnb","title":"Get some BNB","text":"At this point you should have a working FireFly stack, talking to a public chain. However, you won't be able to run any transactions just yet, because you don't have any way to pay for gas. A testnet faucet can give us some BNB, the native token for Binance Smart Chain.
First, you will need to know what signing address your FireFly node is using. To check that, you can run:
ff accounts list bsc\n[\n {\n \"address\": \"0x235461d246ab95d367925b4e91bd2755a921fdd8\",\n \"privateKey\": \"...\"\n }\n]\n Copy the address listed in the output from this command. Go to https://testnet.binance.org/faucet-smart and paste the address in the form. Complete the CAPTCHA and click the Give me BNB button.
"},{"location":"tutorials/chains/binance_smart_chain/#confirm-the-transaction-on-bscscan","title":"Confirm the transaction on Bscscan","text":"You should be able to go lookup your account on Bscscan for the testnet https://testnet.bscscan.com/ and see that you now have a balance of 0.5 BNB. Simply paste in your account address to search for it.
"},{"location":"tutorials/chains/binance_smart_chain/#use-the-public-testnet","title":"Use the public testnet","text":"Now that you have everything set up, you can follow one of the other FireFly guides such as Using Tokens or Custom Smart Contracts. For detailed instructions on deploying a custom smart contract to Binance Smart Chain, please see the Binance docs for instructions using various tools.
"},{"location":"tutorials/chains/fabric_test_network/","title":"Work with Fabric-Samples Test Network","text":"This guide will walk you through the steps to create a local FireFly development environment and connect it to the Fabric Test Network from the Fabric Samples repo
"},{"location":"tutorials/chains/fabric_test_network/#previous-steps-install-the-firefly-cli","title":"Previous steps: Install the FireFly CLI","text":"If you haven't set up the FireFly CLI already, please go back to the Getting Started guide and read the section on how to Install the FireFly CLI.
\u2190 \u2460 Install the FireFly CLI
"},{"location":"tutorials/chains/fabric_test_network/#start-fabric-test-network-with-fabric-ca","title":"Start Fabric Test Network with Fabric CA","text":"For details about the Fabric Test Network and how to set it up, please see the Fabric Samples repo. The one important detail is that you need to start up the Test Network with a Fabric CA. This is because Fabconnect will use the Fabric CA to create an identity for its FireFly node to use. To start up the network with the CA, and create a new channel called mychannel run:
./network.sh up createChannel -ca\n NOTE: If you already have the Test Network running, you will need to bring it down first, by running: ./network.sh down
Next we will need to package and deploy the FireFly chaincode to mychannel in our new network. For more details on packaging and deploying chaincode, please see the Fabric chaincode lifecycle documentation. If you already have the FireFly repo cloned in the same directory as your fabric-samples repo, you can run the following script from your test-network directory:
NOTE: This script is provided as a convenience only, and you are not required to use it. You are welcome to package and deploy the chaincode to your test-network any way you would like.
#!/bin/bash\n\n# This file should be run from the test-network directory in the fabric-samples repo\n# It also assumes that you have the firefly repo checked out at the same level as the fabric-samples directory\n# It also assumes that the test-network is up and running and a channel named 'mychannel' has already been created\n\ncd ../../firefly/smart_contracts/fabric/firefly-go\nGO111MODULE=on go mod vendor\ncd ../../../../fabric-samples/test-network\n\nexport PATH=${PWD}/../bin:$PATH\nexport FABRIC_CFG_PATH=$PWD/../config/\n\npeer lifecycle chaincode package firefly.tar.gz --path ../../firefly/smart_contracts/fabric/firefly-go --lang golang --label firefly_1.0\n\nexport CORE_PEER_TLS_ENABLED=true\nexport CORE_PEER_LOCALMSPID=\"Org1MSP\"\nexport CORE_PEER_TLS_ROOTCERT_FILE=${PWD}/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt\nexport CORE_PEER_MSPCONFIGPATH=${PWD}/organizations/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp\nexport CORE_PEER_ADDRESS=localhost:7051\n\npeer lifecycle chaincode install firefly.tar.gz\n\nexport CORE_PEER_LOCALMSPID=\"Org2MSP\"\nexport CORE_PEER_TLS_ROOTCERT_FILE=${PWD}/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt\nexport CORE_PEER_MSPCONFIGPATH=${PWD}/organizations/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp\nexport CORE_PEER_ADDRESS=localhost:9051\n\npeer lifecycle chaincode install firefly.tar.gz\n\nexport CC_PACKAGE_ID=$(peer lifecycle chaincode queryinstalled --output json | jq --raw-output \".installed_chaincodes[0].package_id\")\n\npeer lifecycle chaincode approveformyorg -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --channelID mychannel --name firefly --version 1.0 --package-id $CC_PACKAGE_ID --sequence 1 --tls --cafile \"${PWD}/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem\"\n\nexport 
CORE_PEER_LOCALMSPID=\"Org1MSP\"\nexport CORE_PEER_MSPCONFIGPATH=${PWD}/organizations/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp\nexport CORE_PEER_TLS_ROOTCERT_FILE=${PWD}/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt\nexport CORE_PEER_ADDRESS=localhost:7051\n\npeer lifecycle chaincode approveformyorg -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --channelID mychannel --name firefly --version 1.0 --package-id $CC_PACKAGE_ID --sequence 1 --tls --cafile \"${PWD}/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem\"\n\npeer lifecycle chaincode commit -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --channelID mychannel --name firefly --version 1.0 --sequence 1 --tls --cafile \"${PWD}/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem\" --peerAddresses localhost:7051 --tlsRootCertFiles \"${PWD}/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt\" --peerAddresses localhost:9051 --tlsRootCertFiles \"${PWD}/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt\"\n"},{"location":"tutorials/chains/fabric_test_network/#create-ccpyml-documents","title":"Create ccp.yml documents","text":"Each FireFly Supernode (specifically the Fabconnect instance in each) will need to know how to connect to the Fabric network. Fabconnect will use a Fabric Connection Profile which describes the network and tells it where the certs and keys are that it needs. Below is a ccp.yml for each organization. You will need to fill in one line by replacing the string FILL_IN_KEY_NAME_HERE, because the file name of the private key for each user is randomly generated.
Create a new file at ~/org1_ccp.yml with the contents below. Replace the string FILL_IN_KEY_NAME_HERE with the filename in your fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/keystore directory.
certificateAuthorities:\n org1.example.com:\n tlsCACerts:\n path: /etc/firefly/organizations/peerOrganizations/org1.example.com/msp/tlscacerts/ca.crt\n url: https://ca_org1:7054\n grpcOptions:\n ssl-target-name-override: org1.example.com\n registrar:\n enrollId: admin\n enrollSecret: adminpw\nchannels:\n mychannel:\n orderers:\n - fabric_orderer\n peers:\n fabric_peer:\n chaincodeQuery: true\n endorsingPeer: true\n eventSource: true\n ledgerQuery: true\nclient:\n BCCSP:\n security:\n default:\n provider: SW\n enabled: true\n hashAlgorithm: SHA2\n level: 256\n softVerify: true\n credentialStore:\n cryptoStore:\n path: /etc/firefly/organizations/peerOrganizations/org1.example.com/msp\n path: /etc/firefly/organizations/peerOrganizations/org1.example.com/msp\n cryptoconfig:\n path: /etc/firefly/organizations/peerOrganizations/org1.example.com/msp\n logging:\n level: info\n organization: org1.example.com\n tlsCerts:\n client:\n cert:\n path: /etc/firefly/organizations/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/signcerts/cert.pem\n key:\n path: /etc/firefly/organizations/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/keystore/FILL_IN_KEY_NAME_HERE\norderers:\n fabric_orderer:\n tlsCACerts:\n path: /etc/firefly/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/tlscacerts/tls-localhost-9054-ca-orderer.pem\n url: grpcs://orderer.example.com:7050\norganizations:\n org1.example.com:\n certificateAuthorities:\n - org1.example.com\n cryptoPath: /tmp/msp\n mspid: Org1MSP\n peers:\n - fabric_peer\npeers:\n fabric_peer:\n tlsCACerts:\n path: /etc/firefly/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/tlscacerts/tls-localhost-7054-ca-org1.pem\n url: grpcs://peer0.org1.example.com:7051\nversion: 1.1.0%\n"},{"location":"tutorials/chains/fabric_test_network/#organization-2-connection-profile","title":"Organization 2 connection profile","text":"Create a new file at 
~/org2_ccp.yml with the contents below. Replace the string FILL_IN_KEY_NAME_HERE with the filename in your fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp/keystore directory.
certificateAuthorities:\n org2.example.com:\n tlsCACerts:\n path: /etc/firefly/organizations/peerOrganizations/org2.example.com/msp/tlscacerts/ca.crt\n url: https://ca_org2:8054\n grpcOptions:\n ssl-target-name-override: org2.example.com\n registrar:\n enrollId: admin\n enrollSecret: adminpw\nchannels:\n mychannel:\n orderers:\n - fabric_orderer\n peers:\n fabric_peer:\n chaincodeQuery: true\n endorsingPeer: true\n eventSource: true\n ledgerQuery: true\nclient:\n BCCSP:\n security:\n default:\n provider: SW\n enabled: true\n hashAlgorithm: SHA2\n level: 256\n softVerify: true\n credentialStore:\n cryptoStore:\n path: /etc/firefly/organizations/peerOrganizations/org2.example.com/msp\n path: /etc/firefly/organizations/peerOrganizations/org2.example.com/msp\n cryptoconfig:\n path: /etc/firefly/organizations/peerOrganizations/org2.example.com/msp\n logging:\n level: info\n organization: org2.example.com\n tlsCerts:\n client:\n cert:\n path: /etc/firefly/organizations/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp/signcerts/cert.pem\n key:\n path: /etc/firefly/organizations/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp/keystore/FILL_IN_KEY_NAME_HERE\norderers:\n fabric_orderer:\n tlsCACerts:\n path: /etc/firefly/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/tlscacerts/tls-localhost-9054-ca-orderer.pem\n url: grpcs://orderer.example.com:7050\norganizations:\n org2.example.com:\n certificateAuthorities:\n - org2.example.com\n cryptoPath: /tmp/msp\n mspid: Org2MSP\n peers:\n - fabric_peer\npeers:\n fabric_peer:\n tlsCACerts:\n path: /etc/firefly/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/tlscacerts/tls-localhost-8054-ca-org2.pem\n url: grpcs://peer0.org2.example.com:9051\nversion: 1.1.0%\n"},{"location":"tutorials/chains/fabric_test_network/#create-the-firefly-stack","title":"Create the FireFly stack","text":"Now we can create a FireFly stack and pass in 
these files as command line flags.
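The FILL_IN_KEY_NAME_HERE placeholder in each ccp.yml can also be filled in with a quick shell substitution, since the private key is the only file in each keystore directory. The sketch below is illustrative only: the temporary directory, the fake _sk filename, and the one-line ccp fragment are stand-ins for the real fabric-samples paths and files.

```shell
# Illustrative stand-ins for the real fabric-samples keystore and ccp.yml
WORK=$(mktemp -d)
mkdir -p "$WORK/keystore"
touch "$WORK/keystore/0f3e1a_sk"   # stands in for the randomly named *_sk file
printf 'path: keystore/FILL_IN_KEY_NAME_HERE\n' > "$WORK/org1_ccp.yml"

# The keystore contains exactly one file: the private key
KEY_NAME=$(ls "$WORK/keystore")

# Substitute the placeholder with the discovered filename
sed -i "s/FILL_IN_KEY_NAME_HERE/$KEY_NAME/" "$WORK/org1_ccp.yml"
cat "$WORK/org1_ccp.yml"
```

For the real files, list the Admin@org1.example.com/msp/keystore directory to get KEY_NAME and run the same substitution against ~/org1_ccp.yml (and likewise for org2).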
NOTE: The following command should be run in the test-network directory as it includes a relative path to the organizations directory containing each org's MSP.
ff init fabric dev \\\n --ccp \"${HOME}/org1_ccp.yml\" \\\n --msp \"organizations\" \\\n --ccp \"${HOME}/org2_ccp.yml\" \\\n --msp \"organizations\" \\\n --channel mychannel \\\n --chaincode firefly\n"},{"location":"tutorials/chains/fabric_test_network/#edit-docker-composeoverrideyml","title":"Edit docker-compose.override.yml","text":"The last step before starting up FireFly is to make sure that our FireFly containers have networking access to the Fabric containers. Because these are in two different Docker Compose networks by default, normally the containers would not be able to connect directly. We can fix this by instructing Docker to also attach our FireFly containers to the Fabric test network Docker Compose network. The easiest way to do that is to edit ~/.firefly/stacks/dev/docker-compose.override.yml and set its contents to the following:
# Add custom config overrides here\n# See https://docs.docker.com/compose/extends\nversion: \"2.1\"\nnetworks:\n default:\n name: fabric_test\n external: true\n"},{"location":"tutorials/chains/fabric_test_network/#start-firefly-stack","title":"Start FireFly stack","text":"Now we can start up FireFly!
ff start dev\n After everything starts up, you should have two FireFly nodes that are each mapped to an Organization in your Fabric network. You can see that they each use separate signing keys for their Org on messages that each FireFly node sends.
"},{"location":"tutorials/chains/fabric_test_network/#connecting-to-a-remote-fabric-network","title":"Connecting to a remote Fabric Network","text":"This same guide can be adapted to connect to a remote Fabric network running somewhere else. They key takeaways are:
you will need a connection profile (ccp.yml) describing how to reach each organization's network components, which is passed to ff init; and you will need the MSP directory containing each org's certs and keys, which is also passed to ff init. There are quite a few moving parts in this guide and if steps are missed or done out of order it can cause problems. Below are some of the common situations that you might run into while following this guide, and solutions for each.
You may see a message along the lines of:
ERROR: for firefly_core_0 Container \"bc04521372aa\" is unhealthy.\nEncountered errors while bringing up the project.\n In this case, we need to look at the container logs to get more detail about what happened. To do this, we can run ff start and tell it not to clean up the stack after the failure, so that we can inspect what went wrong. To do that, run:
ff start dev --verbose --no-rollback\n Then we could run docker logs <container_name> to see the logs for that container.
Error: http://127.0.0.1:5102/identities [500] {\"error\":\"enroll failed: enroll failed: POST failure of request: POST https://ca_org1:7054/enroll\\n{\\\"hosts\\\":null,\\\"certificate_request\\\":\\\"-----BEGIN CERTIFICATE REQUEST-----\\\\nMIH0MIGcAgEAMBAxDjAMBgNVBAMTBWFkbWluMFkwEwYHKoZIzj0CAQYIKoZIzj0D\\\\nAQcDQgAE7qJZ5nGt/kxU9IvrEb7EmgNIgn9xXoQUJLl1+U9nXdWB9cnxcmoitnvy\\\\nYN63kbBuUh0z21vOmO8GLD3QxaRaD6AqMCgGCSqGSIb3DQEJDjEbMBkwFwYDVR0R\\\\nBBAwDoIMMGQ4NGJhZWIwZGY0MAoGCCqGSM49BAMCA0cAMEQCIBcWb127dVxm/80K\\\\nB2LtenAY/Jtb2FbZczolrXNCKq+LAiAcGEJ6Mx8LVaPzuSP4uGpEoty6+bEErc5r\\\\nHVER+0aXiQ==\\\\n-----END CERTIFICATE REQUEST-----\\\\n\\\",\\\"profile\\\":\\\"\\\",\\\"crl_override\\\":\\\"\\\",\\\"label\\\":\\\"\\\",\\\"NotBefore\\\":\\\"0001-01-01T00:00:00Z\\\",\\\"NotAfter\\\":\\\"0001-01-01T00:00:00Z\\\",\\\"ReturnPrecert\\\":false,\\\"CAName\\\":\\\"\\\"}: Post \\\"https://ca_org1:7054/enroll\\\": dial tcp: lookup ca_org1 on 127.0.0.11:53: no such host\"}\n If you see something in your logs that looks like the above, there could be a couple issues:
The hostnames in your ccp.yml may be incorrect: check the ccp.yml for that member and make sure the hostnames are correct. Also check your docker-compose.override.yml file to make sure you added the fabric_test network as instructed above. User credentials store creation failed. Failed to load identity configurations: failed to create identity config from backends: failed to load client TLSConfig : failed to load client key: failed to load pem bytes from path /etc/firefly/organizations/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/keystore/cfc50311e2204f232cfdfaf4eba7731279f2366ec291ca1c1781e2bf7bc75529_sk: open /etc/firefly/organizations/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/keystore/cfc50311e2204f232cfdfaf4eba7731279f2366ec291ca1c1781e2bf7bc75529_sk: no such file or directory\n If you see something in your logs that looks like the above, it's likely that your private key file name is not correct in your ccp.yml file for that particular member. Check your ccp.yml and make sure all the files listed there exist in your organizations directory.
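One way to catch this class of error early is to check that every path a ccp.yml references actually exists on disk. The sketch below is illustrative only: it inlines a one-line ccp fragment and a fake keystore in a temp directory as stand-ins, and assumes the container path prefix /etc/firefly/ maps onto your local organizations directory as in this guide.

```shell
# Build illustrative stand-ins for a ccp.yml and the organizations directory
WORK=$(mktemp -d)
mkdir -p "$WORK/organizations/keystore"
touch "$WORK/organizations/keystore/abc123_sk"
printf 'path: /etc/firefly/organizations/keystore/abc123_sk\n' > "$WORK/ccp.yml"

# For each /etc/firefly/... path in the ccp, check the matching local file exists
grep -o '/etc/firefly/[^ ]*' "$WORK/ccp.yml" | while read -r p; do
  local_path="$WORK/${p#/etc/firefly/}"
  if [ -e "$local_path" ]; then echo "ok: $p"; else echo "missing: $p"; fi
done
```

When running against the real files, substitute ~/org1_ccp.yml (or ~/org2_ccp.yml) and your fabric-samples/test-network directory for the stand-ins.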
Starting with FireFly v1.1, it's easy to connect to public Ethereum chains. This guide will walk you through the steps to create a local FireFly development environment and connect it to the public Moonbeam Alpha testnet.
"},{"location":"tutorials/chains/moonbeam/#previous-steps-install-the-firefly-cli","title":"Previous steps: Install the FireFly CLI","text":"If you haven't set up the FireFly CLI already, please go back to the Getting Started guide and read the section on how to Install the FireFly CLI.
\u2190 \u2460 Install the FireFly CLI
"},{"location":"tutorials/chains/moonbeam/#create-an-evmconnectyml-config-file","title":"Create anevmconnect.yml config file","text":"In order to connect to the Moonbeam testnet, you will need to set a few configuration options for the evmconnect blockchain connector. Create a text file called evmconnect.yml with the following contents:
confirmations:\n required: 4 # choose the number of confirmations you require\npolicyengine.simple:\n fixedGasPrice: null\n gasOracle:\n mode: connector\n For more info about confirmations, see Public vs. Permissioned
For this tutorial, we will assume this file is saved at ~/Desktop/evmconnect.yml. If your path is different, you will need to adjust the path in the next command below.
To create a local FireFly development stack and connect it to the Moonbeam Alpha testnet, we will use command line flags to customize the following settings:
Create a new stack named moonbeam with 1 member; disable multiparty mode (we are going to be using this FireFly node as a Web3 gateway, and we don't need to communicate with a consortium here); set the chain ID to 1287 (the correct ID for the Moonbeam Alpha testnet); and merge the custom evmconnect config file created above. To do this, run the following command:
ff init ethereum moonbeam 1 \\\n --multiparty=false \\\n -n remote-rpc \\\n --remote-node-url <selected RPC endpoint> \\\n --chain-id 1287 \\\n --connector-config ~/Desktop/evmconnect.yml\n"},{"location":"tutorials/chains/moonbeam/#start-the-stack","title":"Start the stack","text":"Now you should be able to start your stack by running:
ff start moonbeam\n After some time it should print out the following:
Web UI for member '0': http://127.0.0.1:5000/ui\nSandbox UI for member '0': http://127.0.0.1:5109\n\n\nTo see logs for your stack run:\n\nff logs moonbeam\n"},{"location":"tutorials/chains/moonbeam/#get-some-dev","title":"Get some DEV","text":"At this point you should have a working FireFly stack, talking to a public chain. However, you won't be able to run any transactions just yet, because you don't have any way to pay for gas. A testnet faucet can give us some DEV, the native token for Moonbeam.
First, you will need to know what signing address your FireFly node is using. To check that, you can run:
ff accounts list moonbeam\n[\n {\n \"address\": \"0x02d42c32a97c894486afbc7b717edff50c70b292\",\n \"privateKey\": \"...\"\n }\n]\n Copy the address listed in the output from this command. Go to https://apps.moonbeam.network/moonbase-alpha/faucet/ and paste the address in the form. Click the Submit button.
"},{"location":"tutorials/chains/moonbeam/#confirm-the-transaction-on-moonscan","title":"Confirm the transaction on Moonscan","text":"You should be able to go lookup your account on Moonscan for the Moonbase Alpha testnet and see that you now have a sufficient balance of DEV. Simply paste in your account address to search for it.
"},{"location":"tutorials/chains/moonbeam/#use-the-public-testnet","title":"Use the public testnet","text":"Now that you have everything set up, you can follow one of the other FireFly guides such as Using Tokens or Custom Smart Contracts. For detailed instructions on interacting with the Moonbeam Alpha testnet, please see the Moonbeam docs.
"},{"location":"tutorials/chains/optimism/","title":"Optimism","text":"Starting with FireFly v1.1, it's easy to connect to public Ethereum chains. This guide will walk you through the steps to create a local FireFly development environment and connect it to the Optimism Goerli testnet.
"},{"location":"tutorials/chains/optimism/#previous-steps-install-the-firefly-cli","title":"Previous steps: Install the FireFly CLI","text":"If you haven't set up the FireFly CLI already, please go back to the Getting Started guide and read the section on how to Install the FireFly CLI.
\u2190 \u2460 Install the FireFly CLI
"},{"location":"tutorials/chains/optimism/#create-an-evmconnectyml-config-file","title":"Create anevmconnect.yml config file","text":"In order to connect to the Optimism testnet, you will need to set a few configuration options for the evmconnect blockchain connector. Create a text file called evmconnect.yml with the following contents:
confirmations:\n required: 4 # choose the number of confirmations you require\npolicyengine.simple:\n fixedGasPrice: null\n gasOracle:\n mode: connector\n For this tutorial, we will assume this file is saved at ~/Desktop/evmconnect.yml. If your path is different, you will need to adjust the path in the next command below.
To create a local FireFly development stack and connect it to the Optimism testnet, we will use command line flags to customize the following settings:
Create a new stack named optimism with 1 member; disable multiparty mode (we are going to be using this FireFly node as a Web3 gateway, and we don't need to communicate with a consortium here); set the chain ID to 420 (the correct ID for the Optimism testnet); and merge the custom evmconnect config file created above. To do this, run the following command:
ff init ethereum optimism 1 \\\n --multiparty=false \\\n -n remote-rpc \\\n --remote-node-url <selected RPC endpoint> \\\n --chain-id 420 \\\n --connector-config ~/Desktop/evmconnect.yml\n"},{"location":"tutorials/chains/optimism/#start-the-stack","title":"Start the stack","text":"Now you should be able to start your stack by running:
ff start optimism\n After some time it should print out the following:
Web UI for member '0': http://127.0.0.1:5000/ui\nSandbox UI for member '0': http://127.0.0.1:5109\n\n\nTo see logs for your stack run:\n\nff logs optimism\n"},{"location":"tutorials/chains/optimism/#get-some-optimism","title":"Get some Optimism","text":"At this point you should have a working FireFly stack, talking to a public chain. However, you won't be able to run any transactions just yet, because you don't have any way to pay for gas. A testnet faucet can give us some OP, the native token for Optimism.
First, you will need to know what signing address your FireFly node is using. To check that, you can run:
ff accounts list optimism\n[\n {\n \"address\": \"0x235461d246ab95d367925b4e91bd2755a921fdd8\",\n \"privateKey\": \"...\"\n }\n]\n Copy the address listed in the output from this command. Go to https://optimismfaucet.xyz/. You will need to login to your Github account and paste the address in the form.
"},{"location":"tutorials/chains/optimism/#confirm-the-transaction-on-blockcscout","title":"Confirm the transaction on Blockcscout","text":"You should be able to go lookup your account on Blockscout for Optimism testnet https://blockscout.com/optimism/goerli and see that you now have a balance of 100 OP. Simply paste in your account address to search for it.
"},{"location":"tutorials/chains/optimism/#use-the-public-testnet","title":"Use the public testnet","text":"Now that you have everything set up, you can follow one of the other FireFly guides such as Using Tokens or Custom Smart Contracts. For detailed instructions on deploying a custom smart contract to Optimism, please see the Optimism docs for instructions using various tools.
"},{"location":"tutorials/chains/polygon_testnet/","title":"Polygon Testnet","text":"Starting with FireFly v1.1, it's easy to connect to public Ethereum chains. This guide will walk you through the steps to create a local FireFly development environment and connect it to the public Polygon Mumbai testnet.
"},{"location":"tutorials/chains/polygon_testnet/#previous-steps-install-the-firefly-cli","title":"Previous steps: Install the FireFly CLI","text":"If you haven't set up the FireFly CLI already, please go back to the Getting Started guide and read the section on how to Install the FireFly CLI.
\u2190 \u2460 Install the FireFly CLI
"},{"location":"tutorials/chains/polygon_testnet/#create-an-evmconnectyml-config-file","title":"Create anevmconnect.yml config file","text":"In order to connect to the Polygon testnet, you will need to set a few configuration options for the evmconnect blockchain connector. Create a text file called evmconnect.yml with the following contents:
confirmations:\n required: 4 # choose the number of confirmations you require\npolicyengine.simple:\n fixedGasPrice: null\n gasOracle:\n mode: connector\n For more info about confirmations, see Public vs. Permissioned
For this tutorial, we will assume this file is saved at ~/Desktop/evmconnect.yml. If your path is different, you will need to adjust the path in the next command below.
To create a local FireFly development stack and connect it to the Polygon Mumbai testnet, we will use command line flags to customize the following settings:
Create a new stack named polygon with 1 member; disable multiparty mode (we are going to be using this FireFly node as a Web3 gateway, and we don't need to communicate with a consortium here); set the chain ID to 80001 (the correct ID for the Polygon Mumbai testnet); and merge the custom evmconnect config file created above. To do this, run the following command:
ff init ethereum polygon 1 \\\n --multiparty=false \\\n -n remote-rpc \\\n --remote-node-url <selected RPC endpoint> \\\n --chain-id 80001 \\\n --connector-config ~/Desktop/evmconnect.yml\n"},{"location":"tutorials/chains/polygon_testnet/#start-the-stack","title":"Start the stack","text":"Now you should be able to start your stack by running:
ff start polygon\n After some time it should print out the following:
Web UI for member '0': http://127.0.0.1:5000/ui\nSandbox UI for member '0': http://127.0.0.1:5109\n\n\nTo see logs for your stack run:\n\nff logs polygon\n"},{"location":"tutorials/chains/polygon_testnet/#get-some-matic","title":"Get some MATIC","text":"At this point you should have a working FireFly stack, talking to a public chain. However, you won't be able to run any transactions just yet, because you don't have any way to pay for gas. A testnet faucet can give us some MATIC, the native token for Polygon.
First, you will need to know what signing address your FireFly node is using. To check that, you can run:
ff accounts list polygon\n[\n {\n \"address\": \"0x02d42c32a97c894486afbc7b717edff50c70b292\",\n \"privateKey\": \"...\"\n }\n]\n Copy the address listed in the output from this command. Go to https://faucet.polygon.technology/ and paste the address in the form. Click the Submit button, and then Confirm.
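If you want to script this step rather than copying the address by hand, it can be extracted from the JSON that ff accounts list prints. The sketch below parses the sample output shown above; in practice you would pipe the output of ff accounts list polygon directly.

```shell
# Sample output from `ff accounts list polygon` (private key elided).
ACCOUNTS='[{"address":"0x02d42c32a97c894486afbc7b717edff50c70b292","privateKey":"..."}]'

# Pull out the first account's address using Python's stdlib JSON parser.
ADDRESS=$(echo "$ACCOUNTS" | python3 -c 'import sys, json; print(json.load(sys.stdin)[0]["address"])')
echo "$ADDRESS"
```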
"},{"location":"tutorials/chains/polygon_testnet/#confirm-the-transaction-on-polygonscan","title":"Confirm the transaction on Polygonscan","text":"You should be able to go lookup your account on Polygonscan for the Mumbai testnet and see that you now have a balance of 0.2 MATIC. Simply paste in your account address to search for it.
You can also click on the Internal Txns tab on your account page to see the actual transfer of the MATIC from the faucet.
"},{"location":"tutorials/chains/polygon_testnet/#use-the-public-testnet","title":"Use the public testnet","text":"Now that you have everything set up, you can follow one of the other FireFly guides such as Using Tokens or Custom Smart Contracts. For detailed instructions on deploying a custom smart contract to Polygon, please see the Polygon docs for instructions using various tools.
"},{"location":"tutorials/chains/tezos_testnet/","title":"Tezos Testnet","text":"This guide will walk you through the steps to create a local FireFly development environment and connect it to the public Tezos Ghostnet testnet.
"},{"location":"tutorials/chains/tezos_testnet/#previous-steps-install-the-firefly-cli","title":"Previous steps: Install the FireFly CLI","text":"If you haven't set up the FireFly CLI already, please go back to the Getting Started guide and read the section on how to Install the FireFly CLI.
\u2190 \u2460 Install the FireFly CLI
"},{"location":"tutorials/chains/tezos_testnet/#set-up-the-transaction-signing-service","title":"Set up the transaction signing service","text":"Signatory service allows to work with many different key-management systems.\\ By default, FF uses local signing option.\\ However, it is also possible to configure the transaction signing service using key management systems such as: AWS/Google/Azure KMS, HCP Vault, etc.
NOTE: The default option is not secure and is mainly used for development and demo purposes. Therefore, for production, use one of the supported KMS options. The full list can be found here.
"},{"location":"tutorials/chains/tezos_testnet/#creating-a-new-stack","title":"Creating a new stack","text":"To create a local FireFly development stack and connect it to the Tezos Ghostnet testnet, we will use command line flags to customize the following settings:
Create a new Tezos stack with 1 member, and disable multiparty mode (we are going to be using this FireFly node as a Web3 gateway, and we don't need to communicate with a consortium here). To do this, run the following command:
ff init tezos dev 1 \\\n --multiparty=false \\\n --remote-node-url <selected RPC endpoint>\n NOTE: Public RPC nodes may have limitations, or may not support all of the RPC endpoints FireFly requires. Therefore they are not recommended for production; you may need to run your own node or use a third-party vendor.
"},{"location":"tutorials/chains/tezos_testnet/#start-the-stack","title":"Start the stack","text":"Now you should be able to start your stack by running:
ff start dev\n After some time it should print out the following:
Web UI for member '0': http://127.0.0.1:5000/ui\nSandbox UI for member '0': http://127.0.0.1:5109\n\n\nTo see logs for your stack run:\n\nff logs dev\n"},{"location":"tutorials/chains/tezos_testnet/#get-some-xtz","title":"Get some XTZ","text":"At this point you should have a working FireFly stack, talking to a public chain. However, you won't be able to run any transactions just yet, because you don't have any way to pay transaction fees. A testnet faucet can give us some XTZ, the native token for Tezos.
First, you need to get the account address that was created during the signer setup step. To check that, you can run:
ff accounts list dev\n[\n {\n \"address\": \"tz1cuFw1E2Mn2bVS8q8d7QoCb6FXC18JivSp\",\n \"privateKey\": \"...\"\n }\n]\n After that, go to Tezos Ghostnet Faucet and paste the address in the form and click the Request button.
"},{"location":"tutorials/chains/tezos_testnet/#confirm-the-transaction-on-tzstats","title":"Confirm the transaction on TzStats","text":"You should be able to go lookup your account on TzStats for the Ghostnet testnet and see that you now have a balance of 100 XTZ (or 2001 XTZ accordingly). Simply paste in your account address to search for it.
On the Transfers tab of your account page you will see the actual transfer of the XTZ from the faucet.
"},{"location":"tutorials/chains/tezos_testnet/#use-the-public-testnet","title":"Use the public testnet","text":"Now that you have everything set up, you can follow one of the other FireFly guides such as Custom Smart Contracts. For detailed instructions on deploying a custom smart contract to Tezos, please see the Tezos docs for instructions using various tools.
"},{"location":"tutorials/chains/zksync_testnet/","title":"zkSync Testnet","text":"Starting with FireFly v1.1, it's easy to connect to public Ethereum chains. This guide will walk you through the steps to create a local FireFly development environment and connect it to the zkSync testnet.
"},{"location":"tutorials/chains/zksync_testnet/#previous-steps-install-the-firefly-cli","title":"Previous steps: Install the FireFly CLI","text":"If you haven't set up the FireFly CLI already, please go back to the Getting Started guide and read the section on how to Install the FireFly CLI.
\u2190 \u2460 Install the FireFly CLI
"},{"location":"tutorials/chains/zksync_testnet/#create-an-evmconnectyml-config-file","title":"Create anevmconnect.yml config file","text":"In order to connect to the zkSync testnet, you will need to set a few configuration options for the evmconnect blockchain connector. Create a text file called evmconnect.yml with the following contents:
confirmations:\n required: 4 # choose the number of confirmations you require\npolicyengine.simple:\n fixedGasPrice: null\n gasOracle:\n mode: connector\n For this tutorial, we will assume this file is saved at ~/Desktop/evmconnect.yml. If your path is different, you will need to adjust the path in the next command below.
To create a local FireFly development stack and connect it to the zkSync testnet, we will use command line flags to customize the following settings:
Create a new stack named zksync with 1 member; disable multiparty mode (we are going to be using this FireFly node as a Web3 gateway, and we don't need to communicate with a consortium here); use the evmconnect blockchain connector; connect to a remote RPC node at https://zksync2-testnet.zksync.dev; set the chain ID to 280 (the correct ID for the zkSync testnet); and merge in our custom evmconnect config file. To do this, run the following command:
ff init zksync 1 \\\n --multiparty=false \\\n -b ethereum \\\n -c evmconnect \\\n -n remote-rpc \\\n --remote-node-url https://zksync2-testnet.zksync.dev \\\n --chain-id 280 \\\n --connector-config ~/Desktop/evmconnect.yml\n"},{"location":"tutorials/chains/zksync_testnet/#start-the-stack","title":"Start the stack","text":"Now you should be able to start your stack by running:
ff start zksync\n After some time it should print out the following:
Web UI for member '0': http://127.0.0.1:5000/ui\nSandbox UI for member '0': http://127.0.0.1:5109\n\n\nTo see logs for your stack run:\n\nff logs zksync\n"},{"location":"tutorials/chains/zksync_testnet/#get-some-eth","title":"Get some ETH","text":"At this point you should have a working FireFly stack, talking to a public chain. However, you won't be able to run any transactions just yet, because you don't have any way to pay for gas. zkSync does not currently have its own native token and instead uses ETH for transactions. A testnet faucet can give us some ETH.
First, you will need to know what signing address your FireFly node is using. To check that, you can run:
ff accounts list zksync\n[\n {\n \"address\": \"0x8cf4fd38b2d56a905113d23b5a7131f0269d8611\",\n \"privateKey\": \"...\"\n }\n]\n Copy your zkSync address and go to the Goerli Ethereum faucet and paste the address in the form. Click the Request Tokens button. Note that any Goerli Ethereum faucet will work.
"},{"location":"tutorials/chains/zksync_testnet/#confirm-the-transaction-on-the-etherscan-explorer","title":"Confirm the transaction on the Etherscan Explorer","text":"You should be able to go lookup your account at https://etherscan.io/ and see that you now have a balance of 0.025 ETH. Simply paste in your account address to search for it.
"},{"location":"tutorials/chains/zksync_testnet/#use-the-public-testnet","title":"Use the public testnet","text":"Now that you have everything set up, you can follow one of the other FireFly guides such as Using Tokens or Custom Smart Contracts. For detailed instructions on deploying a custom smart contract to zkSync, please see the zkSync docs for instructions using various tools.
"},{"location":"tutorials/custom_contracts/","title":"Work with custom smart contracts","text":""},{"location":"tutorials/custom_contracts/#quick-reference","title":"Quick reference","text":"Almost all blockchain platforms offer the ability to execute smart contracts on-chain in order to manage states on the shared ledger. FireFly provides support to use RESTful APIs to interact with the smart contracts deployed in the target blockchains, and listening to events via websocket.
FireFly's unified API creates a consistent application experience regardless of the specific underlying blockchain implementation. It also provides developer-friendly features like automatic OpenAPI Specification generation for smart contracts, plus a built-in Swagger UI.
"},{"location":"tutorials/custom_contracts/#key-concepts","title":"Key concepts","text":"FireFly defines the following constructs to support custom smart contracts:
"},{"location":"tutorials/custom_contracts/#contract-interface","title":"Contract Interface","text":"FireFly defines a common, blockchain agnostic way to describe smart contracts. This is referred to as a Contract Interface. A contract interface is written in the FireFly Interface (FFI) format. It is a simple JSON document that has a name, a namespace, a version, a list of methods, and a list of events.
For more details, you can also have a look at the Reference page for the FireFly Interface Format.
For blockchains that offer a DSL describing the smart contract interface, such as Ethereum's ABI (Application Binary Interface), FireFly offers an API to convert the DSL into the FFI format.
NOTE: Contract interfaces are scoped to a namespace. Within a namespace each contract interface must have a unique name and version combination. The same name and version combination can exist in different namespaces simultaneously.
"},{"location":"tutorials/custom_contracts/#http-api","title":"HTTP API","text":"Based on a Contract Interface, FireFly further defines an HTTP API for the smart contract, which is complete with an OpenAPI Specification and the Swagger UI. An HTTP API defines an /invoke root path to submit transactions, and a /query root path to send query requests to read the state back out.
How the invoke vs. query requests get interpreted into native blockchain requests is specific to the blockchain's connector. For instance, the Ethereum connector translates /invoke calls to eth_sendTransaction JSON-RPC requests, while /query calls are translated into eth_call JSON-RPC requests. On the other hand, the Fabric connector translates /invoke calls to the multiple requests required to submit a transaction to a Fabric channel (which first collects endorsements from peer nodes, and then sends the assembled transaction payload to an orderer; for details please refer to the Fabric documentation).
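As a rough illustration of that mapping for Ethereum, these are the standard JSON-RPC request shapes involved. The exact payloads the connector builds are internal details; the from/to/data values below are placeholders, not real request bodies.

```shell
# Illustrative JSON-RPC shapes: /invoke maps (roughly) to eth_sendTransaction,
# /query maps to eth_call. Field values are placeholders.
cat > /tmp/jsonrpc-shapes.jsonl <<'EOF'
{"jsonrpc":"2.0","method":"eth_sendTransaction","params":[{"from":"0x...","to":"0x...","data":"0x..."}],"id":1}
{"jsonrpc":"2.0","method":"eth_call","params":[{"to":"0x...","data":"0x..."},"latest"],"id":2}
EOF
cat /tmp/jsonrpc-shapes.jsonl
```

Note the second parameter of eth_call ("latest"): reads execute against a specific block, which is why queries return immediately while invokes must wait for mining.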
Regardless of a blockchain's specific design, transaction processing is always asynchronous. This means a transaction is submitted to the network, at which point the submitting client gets an acknowledgement that it has been accepted for further processing. The client then listens for notifications from the blockchain when the transaction gets committed to the blockchain's ledger.
FireFly defines event listeners to allow the client application to specify the relevant blockchain events to keep track of. A client application can then receive the notifications from FireFly via an event subscription.
"},{"location":"tutorials/custom_contracts/#event-subscription","title":"Event Subscription","text":"An event listener in FireFly tracks specific blockchain events, while an event subscription directs FireFly to send those events to the client application. Each subscription creates a stream of events that can be delivered to the client with various delivery options, ensuring an at-least-once delivery guarantee.
This is exactly the same as listening for any other events from FireFly. For more details on how Subscriptions work in FireFly you can read the Getting Started guide to Listen for events.
"},{"location":"tutorials/custom_contracts/#custom-onchain-logic-async-programming-in-firefly","title":"Custom onchain logic async programming in FireFly","text":"Like the rest of FireFly, custom onchain logic support are implemented with an asynchronous programming model. The key concepts here are:
FireFly emits an event of type blockchain_event_received when this happens. This guide describes the steps to deploy a smart contract to an Ethereum blockchain and use FireFly to interact with it in order to submit transactions, query for state, and listen for events.
NOTE: This guide assumes that you are running a local FireFly stack with at least 2 members and an Ethereum blockchain created by the FireFly CLI. If you need help getting that set up, please see the Getting Started guide to Start your environment.
"},{"location":"tutorials/custom_contracts/ethereum/#example-smart-contract","title":"Example smart contract","text":"For this tutorial, we will be using a well known, but slightly modified smart contract called SimpleStorage, and will be using this contract on an Ethereum blockchain. As the name implies, it's a very simple contract which stores an unsigned 256 bit integer, emits and event when the value is updated, and allows you to retrieve the current value.
Here is the source for this contract:
// SPDX-License-Identifier: Apache-2.0\npragma solidity ^0.8.10;\n\n// Declares a new contract\ncontract SimpleStorage {\n // Storage. Persists in between transactions\n uint256 x;\n\n // Allows the unsigned integer stored to be changed\n function set(uint256 newValue) public {\n x = newValue;\n emit Changed(msg.sender, newValue);\n }\n\n // Returns the currently stored unsigned integer\n function get() public view returns (uint256) {\n return x;\n }\n\n event Changed(address indexed from, uint256 value);\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#contract-deployment","title":"Contract deployment","text":"If you need to deploy an Ethereum smart contract with a signing key that FireFly will use for submitting future transactions it is recommended to use FireFly's built in contract deployment API. This is useful in many cases. For example, you may want to deploy a token contract and have FireFly mint some tokens. Many token contracts only allow the contract deployer to mint, so the contract would need to be deployed with a FireFly signing key.
You will need to compile the contract yourself using solc or some other tool. After you have compiled the contract, look in the JSON output file for the fields to build the request below.
"},{"location":"tutorials/custom_contracts/ethereum/#request","title":"Request","text":"Field Descriptionkey The signing key to use to dpeloy the contract. If omitted, the namespaces's default signing key will be used. contract The compiled bytecode for your smart contract. It should be either a hex encded string or Base64. definition The full ABI JSON array from your compiled JSON file. Copy the entire value of the abi field from the [ to the ]. input An ordered list of constructor arguments. Some contracts may not require any (such as this example). POST http://localhost:5000/api/v1/namespaces/default/contracts/deploy
{\n \"contract\": \"608060405234801561001057600080fd5b5061019e806100206000396000f3fe608060405234801561001057600080fd5b50600436106100365760003560e01c806360fe47b11461003b5780636d4ce63c14610057575b600080fd5b61005560048036038101906100509190610111565b610075565b005b61005f6100cd565b60405161006c919061014d565b60405180910390f35b806000819055503373ffffffffffffffffffffffffffffffffffffffff167fb52dda022b6c1a1f40905a85f257f689aa5d69d850e49cf939d688fbe5af5946826040516100c2919061014d565b60405180910390a250565b60008054905090565b600080fd5b6000819050919050565b6100ee816100db565b81146100f957600080fd5b50565b60008135905061010b816100e5565b92915050565b600060208284031215610127576101266100d6565b5b6000610135848285016100fc565b91505092915050565b610147816100db565b82525050565b6000602082019050610162600083018461013e565b9291505056fea2646970667358221220e6cbd7725b98b234d07bc1823b60ac065b567c6645d15c8f8f6986e5fa5317c664736f6c634300080b0033\",\n \"definition\": [\n {\n \"anonymous\": false,\n \"inputs\": [\n {\n \"indexed\": true,\n \"internalType\": \"address\",\n \"name\": \"from\",\n \"type\": \"address\"\n },\n {\n \"indexed\": false,\n \"internalType\": \"uint256\",\n \"name\": \"value\",\n \"type\": \"uint256\"\n }\n ],\n \"name\": \"Changed\",\n \"type\": \"event\"\n },\n {\n \"inputs\": [],\n \"name\": \"get\",\n \"outputs\": [\n {\n \"internalType\": \"uint256\",\n \"name\": \"\",\n \"type\": \"uint256\"\n }\n ],\n \"stateMutability\": \"view\",\n \"type\": \"function\"\n },\n {\n \"inputs\": [\n {\n \"internalType\": \"uint256\",\n \"name\": \"newValue\",\n \"type\": \"uint256\"\n }\n ],\n \"name\": \"set\",\n \"outputs\": [],\n \"stateMutability\": \"nonpayable\",\n \"type\": \"function\"\n }\n ],\n \"input\": []\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#response","title":"Response","text":"{\n \"id\": \"aa155a3c-2591-410e-bc9d-68ae7de34689\",\n \"namespace\": \"default\",\n \"tx\": \"4712ffb3-cc1a-4a91-aef2-206ac068ba6f\",\n \"type\": \"blockchain_deploy\",\n \"status\": 
\"Succeeded\",\n \"plugin\": \"ethereum\",\n \"input\": {\n \"contract\": \"608060405234801561001057600080fd5b5061019e806100206000396000f3fe608060405234801561001057600080fd5b50600436106100365760003560e01c806360fe47b11461003b5780636d4ce63c14610057575b600080fd5b61005560048036038101906100509190610111565b610075565b005b61005f6100cd565b60405161006c919061014d565b60405180910390f35b806000819055503373ffffffffffffffffffffffffffffffffffffffff167fb52dda022b6c1a1f40905a85f257f689aa5d69d850e49cf939d688fbe5af5946826040516100c2919061014d565b60405180910390a250565b60008054905090565b600080fd5b6000819050919050565b6100ee816100db565b81146100f957600080fd5b50565b60008135905061010b816100e5565b92915050565b600060208284031215610127576101266100d6565b5b6000610135848285016100fc565b91505092915050565b610147816100db565b82525050565b6000602082019050610162600083018461013e565b9291505056fea2646970667358221220e6cbd7725b98b234d07bc1823b60ac065b567c6645d15c8f8f6986e5fa5317c664736f6c634300080b0033\",\n \"definition\": [\n {\n \"anonymous\": false,\n \"inputs\": [\n {\n \"indexed\": true,\n \"internalType\": \"address\",\n \"name\": \"from\",\n \"type\": \"address\"\n },\n {\n \"indexed\": false,\n \"internalType\": \"uint256\",\n \"name\": \"value\",\n \"type\": \"uint256\"\n }\n ],\n \"name\": \"Changed\",\n \"type\": \"event\"\n },\n {\n \"inputs\": [],\n \"name\": \"get\",\n \"outputs\": [\n {\n \"internalType\": \"uint256\",\n \"name\": \"\",\n \"type\": \"uint256\"\n }\n ],\n \"stateMutability\": \"view\",\n \"type\": \"function\"\n },\n {\n \"inputs\": [\n {\n \"internalType\": \"uint256\",\n \"name\": \"newValue\",\n \"type\": \"uint256\"\n }\n ],\n \"name\": \"set\",\n \"outputs\": [],\n \"stateMutability\": \"nonpayable\",\n \"type\": \"function\"\n }\n ],\n \"input\": [],\n \"key\": \"0xddd93a452bfc8d3e62bbc60c243046e4d0cb971b\",\n \"options\": null\n },\n \"output\": {\n \"headers\": {\n \"requestId\": \"default:aa155a3c-2591-410e-bc9d-68ae7de34689\",\n \"type\": \"TransactionSuccess\"\n },\n 
\"contractLocation\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n },\n \"protocolId\": \"000000000024/000000\",\n \"transactionHash\": \"0x32d1144091877266d7f0426e48db157e7d1a857c62e6f488319bb09243f0f851\"\n },\n \"created\": \"2023-02-03T15:42:52.750277Z\",\n \"updated\": \"2023-02-03T15:42:52.750277Z\"\n}\n Here we can see in the response above under the output section that our new contract address is 0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1. This is the address that we will reference in the rest of this guide.
If you have an Ethereum ABI for an existing smart contract, there is an HTTP endpoint on the FireFly API that will take the ABI as input and automatically generate the FireFly Interface for you. Rather than handcrafting our FFI, we'll let FireFly generate it for us using that endpoint now.
"},{"location":"tutorials/custom_contracts/ethereum/#request_1","title":"Request","text":"Here we will take the JSON ABI generated by truffle or solc and POST that to FireFly to have it automatically generate the FireFly Interface for us. Copy the abi from the compiled JSON file, and put that inside an input object like the example below:
POST http://localhost:5000/api/v1/namespaces/default/contracts/interfaces/generate
{\n \"input\": {\n \"abi\": [\n {\n \"anonymous\": false,\n \"inputs\": [\n {\n \"indexed\": true,\n \"internalType\": \"address\",\n \"name\": \"from\",\n \"type\": \"address\"\n },\n {\n \"indexed\": false,\n \"internalType\": \"uint256\",\n \"name\": \"value\",\n \"type\": \"uint256\"\n }\n ],\n \"name\": \"Changed\",\n \"type\": \"event\"\n },\n {\n \"inputs\": [],\n \"name\": \"get\",\n \"outputs\": [\n {\n \"internalType\": \"uint256\",\n \"name\": \"\",\n \"type\": \"uint256\"\n }\n ],\n \"stateMutability\": \"view\",\n \"type\": \"function\"\n },\n {\n \"inputs\": [\n {\n \"internalType\": \"uint256\",\n \"name\": \"newValue\",\n \"type\": \"uint256\"\n }\n ],\n \"name\": \"set\",\n \"outputs\": [],\n \"stateMutability\": \"nonpayable\",\n \"type\": \"function\"\n }\n ]\n }\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#response_1","title":"Response","text":"FireFly generates and returns the the full FireFly Interface for the SimpleStorage contract in the response body:
{\n \"namespace\": \"default\",\n \"name\": \"\",\n \"description\": \"\",\n \"version\": \"\",\n \"methods\": [\n {\n \"name\": \"get\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [],\n \"returns\": [\n {\n \"name\": \"\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n },\n {\n \"name\": \"set\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"newValue\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ],\n \"returns\": []\n }\n ],\n \"events\": [\n {\n \"name\": \"Changed\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"from\",\n \"schema\": {\n \"type\": \"string\",\n \"details\": {\n \"type\": \"address\",\n \"internalType\": \"address\",\n \"indexed\": true\n }\n }\n },\n {\n \"name\": \"value\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n }\n ]\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#broadcast-the-contract-interface","title":"Broadcast the contract interface","text":"Now that we have a FireFly Interface representation of our smart contract, we want to broadcast that to the entire network. This broadcast will be pinned to the blockchain, so we can always refer to this specific name and version, and everyone in the network will know exactly which contract interface we are talking about.
We will take the output from the previous HTTP response above, fill in the name and version and then POST that to the /contracts/interfaces API endpoint.
POST http://localhost:5000/api/v1/namespaces/default/contracts/interfaces?publish=true
NOTE: Without passing the query parameter publish=true when the interface is created, it will initially be unpublished and not broadcasted to other members of the network (if configured in multi-party). To publish the interface, a subsequent API call would need to be made to /contracts/interfaces/{name}/{version}/publish
{\n \"namespace\": \"default\",\n \"name\": \"SimpleStorage\",\n \"version\": \"v1.0.0\",\n \"description\": \"\",\n \"methods\": [\n {\n \"name\": \"get\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [],\n \"returns\": [\n {\n \"name\": \"\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n },\n {\n \"name\": \"set\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"newValue\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ],\n \"returns\": []\n }\n ],\n \"events\": [\n {\n \"name\": \"Changed\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"from\",\n \"schema\": {\n \"type\": \"string\",\n \"details\": {\n \"type\": \"address\",\n \"internalType\": \"address\",\n \"indexed\": true\n }\n }\n },\n {\n \"name\": \"value\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n }\n ]\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#response_2","title":"Response","text":"{\n \"id\": \"8bdd27a5-67c1-4960-8d1e-7aa31b9084d3\",\n \"message\": \"3cd0dde2-1e39-4c9e-a4a1-569e87cca93a\",\n \"namespace\": \"default\",\n \"name\": \"SimpleStorage\",\n \"description\": \"\",\n \"version\": \"v1.0.0\",\n \"methods\": [\n {\n \"id\": \"56467890-5713-4463-84b8-4537fcb63d8b\",\n \"contract\": \"8bdd27a5-67c1-4960-8d1e-7aa31b9084d3\",\n \"name\": \"get\",\n \"namespace\": \"default\",\n \"pathname\": \"get\",\n \"description\": \"\",\n \"params\": [],\n \"returns\": [\n {\n \"name\": \"\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n },\n {\n \"id\": \"6b254d1d-5f5f-491e-bbd2-201e96892e1a\",\n \"contract\": \"8bdd27a5-67c1-4960-8d1e-7aa31b9084d3\",\n \"name\": \"set\",\n \"namespace\": 
\"default\",\n \"pathname\": \"set\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"newValue\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ],\n \"returns\": []\n }\n ],\n \"events\": [\n {\n \"id\": \"aa1fe67b-b2ac-41af-a7e7-7ad54a30a78d\",\n \"contract\": \"8bdd27a5-67c1-4960-8d1e-7aa31b9084d3\",\n \"namespace\": \"default\",\n \"pathname\": \"Changed\",\n \"name\": \"Changed\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"from\",\n \"schema\": {\n \"type\": \"string\",\n \"details\": {\n \"type\": \"address\",\n \"internalType\": \"address\",\n \"indexed\": true\n }\n }\n },\n {\n \"name\": \"value\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n }\n ],\n \"published\": true\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#create-an-http-api-for-the-contract","title":"Create an HTTP API for the contract","text":"Now comes the fun part where we see some of the powerful, developer-friendly features of FireFly. The next thing we're going to do is tell FireFly to build an HTTP API for this smart contract, complete with an OpenAPI Specification and Swagger UI. As part of this, we'll also tell FireFly where the contract is on the blockchain. Like the interface broadcast above, this will also generate a broadcast which will be pinned to the blockchain so all the members of the network will be aware of and able to interact with this API.
We need to copy the id field we got in the response from the previous step to the interface.id field in the request body below. We will also pick a name that will be part of the URL for our HTTP API, so be sure to pick a name that is URL friendly. In this case we'll call it simple-storage. Lastly, in the location.address field, we're telling FireFly where an instance of the contract is deployed on-chain.
NOTE: The location field is optional here, but if it is omitted, it will be required in every request to invoke or query the contract. This can be useful if you have multiple instances of the same contract deployed to different addresses.
POST http://localhost:5000/api/v1/namespaces/default/apis?publish=true
NOTE: Without passing the query parameter publish=true when the API is created, it will initially be unpublished and not broadcasted to other members of the network (if configured in multi-party). To publish the API, a subsequent API call would need to be made to /apis/{apiName}/publish
{\n \"name\": \"simple-storage\",\n \"interface\": {\n \"id\": \"8bdd27a5-67c1-4960-8d1e-7aa31b9084d3\"\n },\n \"location\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n }\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#response_3","title":"Response","text":"{\n \"id\": \"9a681ec6-1dee-42a0-b91b-61d23a814b0f\",\n \"namespace\": \"default\",\n \"interface\": {\n \"id\": \"8bdd27a5-67c1-4960-8d1e-7aa31b9084d3\"\n },\n \"location\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n },\n \"name\": \"simple-storage\",\n \"message\": \"d90d0386-8874-43fb-b7d3-485c22f35f47\",\n \"urls\": {\n \"openapi\": \"http://127.0.0.1:5000/api/v1/namespaces/default/apis/simple-storage/api/swagger.json\",\n \"ui\": \"http://127.0.0.1:5000/api/v1/namespaces/default/apis/simple-storage/api\"\n },\n \"published\": true\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#view-openapi-spec-for-the-contract","title":"View OpenAPI spec for the contract","text":"You'll notice in the response body that there are a couple of URLs near the bottom. If you navigate to the one labeled ui in your browser, you should see the Swagger UI for your smart contract.
Now that we've got everything set up, it's time to use our smart contract! We're going to make a POST request to the invoke/set endpoint to set the integer value on-chain. Let's set it to the value of 3 right now.
POST http://localhost:5000/api/v1/namespaces/default/apis/simple-storage/invoke/set
{\n \"input\": {\n \"newValue\": 3\n }\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#response_4","title":"Response","text":"{\n \"id\": \"41c67c63-52cf-47ce-8a59-895fe2ffdc86\"\n}\n You'll notice that we just get an ID back here, and that's expected due to the asynchronous programming model of working with smart contracts in FireFly. To see what the value is now, we can query the smart contract. In a little bit, we'll also subscribe to the events emitted by this contract so we can know when the value is updated in realtime.
"},{"location":"tutorials/custom_contracts/ethereum/#query-the-current-value","title":"Query the current value","text":"To make a read-only request to the blockchain to check the current value of the stored integer, we can make a POST to the query/get endpoint.
POST http://localhost:5000/api/v1/namespaces/default/apis/simple-storage/query/get
{}\n"},{"location":"tutorials/custom_contracts/ethereum/#response_5","title":"Response","text":"{\n \"output\": \"3\"\n}\n NOTE: Some contracts may have queries that require input parameters. That's why the query endpoint is a POST, rather than a GET so that parameters can be passed as JSON in the request body. This particular function does not have any parameters, so we just pass an empty JSON object.
Some smart contract functions may accept or require additional options to be passed with the request. For example, a Solidity function might be payable, meaning that a value field must be specified, indicating an amount of ETH to be transferred with the request. Each of your smart contract API's /invoke or /query endpoints support an options object in addition to the input arguments for the function itself.
Here is an example of sending 100 wei with a transaction:
"},{"location":"tutorials/custom_contracts/ethereum/#request_6","title":"Request","text":"POST http://localhost:5000/api/v1/namespaces/default/apis/simple-storage/invoke/set
{\n \"input\": {\n \"newValue\": 3\n },\n \"options\": {\n \"value\": 100\n }\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#response_6","title":"Response","text":"{\n \"id\": \"41c67c63-52cf-47ce-8a59-895fe2ffdc86\"\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#create-a-blockchain-event-listener","title":"Create a blockchain event listener","text":"Now that we've seen how to submit transactions and perform read-only queries to the blockchain, let's look at how to receive blockchain events so we know when things are happening in realtime.
If you look at the source code for the smart contract we're working with above, you'll notice that it emits an event when the stored value of the integer is set. In order to receive these events, we first need to instruct FireFly to listen for this specific type of blockchain event. To do this, we create an Event Listener. The /contracts/listeners endpoint is RESTful so there are POST, GET, and DELETE methods available on it. To create a new listener, we will make a POST request. We are going to tell FireFly to listen to events with name \"Changed\" from the FireFly Interface we defined earlier, referenced by its ID. We will also tell FireFly which contract address we expect to emit these events, and the topic to assign these events to. You can specify multiple filters for a listener; in this case we only specify one for our event. Topics are a way for applications to subscribe to events they are interested in.
POST http://localhost:5000/api/v1/namespaces/default/contracts/listeners
{\n \"filters\": [\n {\n \"interface\": {\n \"id\": \"8bdd27a5-67c1-4960-8d1e-7aa31b9084d3\"\n },\n \"location\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n },\n \"eventPath\": \"Changed\"\n }\n ],\n \"options\": {\n \"firstEvent\": \"newest\"\n },\n \"topic\": \"simple-storage\"\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#response_7","title":"Response","text":"{\n \"id\": \"e7c8457f-4ffd-42eb-ac11-4ad8aed30de1\",\n \"interface\": {\n \"id\": \"55fdb62a-fefc-4313-99e4-e3f95fcca5f0\"\n },\n \"namespace\": \"default\",\n \"name\": \"019104d7-bb0a-c008-76a9-8cb923d91b37\",\n \"backendId\": \"019104d7-bb0a-c008-76a9-8cb923d91b37\",\n \"location\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n },\n \"created\": \"2024-07-30T18:12:12.704964Z\",\n \"event\": {\n \"name\": \"Changed\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"from\",\n \"schema\": {\n \"type\": \"string\",\n \"details\": {\n \"type\": \"address\",\n \"internalType\": \"address\",\n \"indexed\": true\n }\n }\n },\n {\n \"name\": \"value\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n },\n \"signature\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1:Changed(address,uint256) [i=0]\",\n \"topic\": \"simple-storage\",\n \"options\": {\n \"firstEvent\": \"newest\"\n },\n \"filters\": [\n {\n \"event\": {\n \"name\": \"Changed\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"from\",\n \"schema\": {\n \"type\": \"string\",\n \"details\": {\n \"type\": \"address\",\n \"internalType\": \"address\",\n \"indexed\": true\n }\n }\n },\n {\n \"name\": \"value\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n },\n \"location\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n },\n \"interface\": {\n \"id\": \"55fdb62a-fefc-4313-99e4-e3f95fcca5f0\"\n 
},\n \"signature\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1:Changed(address,uint256) [i=0]\"\n }\n ]\n}\n We can see in the response, that FireFly pulls all the schema information from the FireFly Interface that we broadcasted earlier and creates the listener with that schema. This is useful so that we don't have to enter all of that data again.
"},{"location":"tutorials/custom_contracts/ethereum/#querying-listener-status","title":"Querying listener status","text":"If you are interested in learning about the current state of a listener you have created, you can query with the fetchstatus parameter. For FireFly stacks with an EVM compatible blockchain connector, the response will include checkpoint information and whether the listener is currently in catchup mode.
GET http://localhost:5000/api/v1/namespaces/default/contracts/listeners/1bfa3b0f-3d90-403e-94a4-af978d8c5b14?fetchstatus
{\n \"id\": \"1bfa3b0f-3d90-403e-94a4-af978d8c5b14\",\n \"interface\": {\n \"id\": \"8bdd27a5-67c1-4960-8d1e-7aa31b9084d3\"\n },\n \"namespace\": \"default\",\n \"name\": \"sb-66209ffc-d355-4ac0-7151-bc82490ca9df\",\n \"protocolId\": \"sb-66209ffc-d355-4ac0-7151-bc82490ca9df\",\n \"location\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n },\n \"created\": \"2022-02-17T22:02:36.34549538Z\",\n \"event\": {\n \"name\": \"Changed\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"from\",\n \"schema\": {\n \"type\": \"string\",\n \"details\": {\n \"type\": \"address\",\n \"internalType\": \"address\",\n \"indexed\": true\n }\n }\n },\n {\n \"name\": \"value\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n },\n \"status\": {\n \"checkpoint\": {\n \"block\": 0,\n \"transactionIndex\": -1,\n \"logIndex\": -1\n },\n \"catchup\": true\n },\n \"options\": {\n \"firstEvent\": \"oldest\"\n }\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#subscribe-to-events-from-our-contract","title":"Subscribe to events from our contract","text":"Now that we've told FireFly that it should listen for specific events on the blockchain, we can set up a Subscription for FireFly to send events to our app. To set up our subscription, we will make a POST to the /subscriptions endpoint.
We will set a friendly name simple-storage to identify the Subscription when we are connecting to it in the next step.
We're also going to set up a filter to send only the blockchain events from the listener that we created in the previous step. To do that, we'll copy the listener ID from the step above (1bfa3b0f-3d90-403e-94a4-af978d8c5b14) and set that as the value of the listener field in the example below:
POST http://localhost:5000/api/v1/namespaces/default/subscriptions
{\n \"namespace\": \"default\",\n \"name\": \"simple-storage\",\n \"transport\": \"websockets\",\n \"filter\": {\n \"events\": \"blockchain_event_received\",\n \"blockchainevent\": {\n \"listener\": \"1bfa3b0f-3d90-403e-94a4-af978d8c5b14\"\n }\n },\n \"options\": {\n \"firstEvent\": \"oldest\"\n }\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#response_8","title":"Response","text":"{\n \"id\": \"f826269c-65ed-4634-b24c-4f399ec53a32\",\n \"namespace\": \"default\",\n \"name\": \"simple-storage\",\n \"transport\": \"websockets\",\n \"filter\": {\n \"events\": \"blockchain_event_received\",\n \"message\": {},\n \"transaction\": {},\n \"blockchainevent\": {\n \"listener\": \"1bfa3b0f-3d90-403e-94a4-af978d8c5b14\"\n }\n },\n \"options\": {\n \"firstEvent\": \"-1\",\n \"withData\": false\n },\n \"created\": \"2022-03-15T17:35:30.131698921Z\",\n \"updated\": null\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#receive-custom-smart-contract-events","title":"Receive custom smart contract events","text":"The last step is to connect a WebSocket client to FireFly to receive the event. You can use any WebSocket client you like, such as Postman or a command line app like websocat.
Connect your WebSocket client to ws://localhost:5000/ws.
After connecting the WebSocket client, send a message to tell FireFly to start delivering events from the subscription named simple-storage in the default namespace, automatically acknowledging each event:
{\n \"type\": \"start\",\n \"name\": \"simple-storage\",\n \"namespace\": \"default\",\n \"autoack\": true\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#websocket-event","title":"WebSocket event","text":"After creating the subscription, you should see an event arrive on the connected WebSocket client that looks something like this:
{\n \"id\": \"0f4a31d6-9743-4537-82df-5a9c76ccbd1e\",\n \"sequence\": 24,\n \"type\": \"blockchain_event_received\",\n \"namespace\": \"default\",\n \"reference\": \"dd3e1554-c832-47a8-898e-f1ee406bea41\",\n \"created\": \"2022-03-15T17:32:27.824417878Z\",\n \"blockchainevent\": {\n \"id\": \"dd3e1554-c832-47a8-898e-f1ee406bea41\",\n \"sequence\": 7,\n \"source\": \"ethereum\",\n \"namespace\": \"default\",\n \"name\": \"Changed\",\n \"listener\": \"1bfa3b0f-3d90-403e-94a4-af978d8c5b14\",\n \"protocolId\": \"000000000010/000000/000000\",\n \"output\": {\n \"from\": \"0xb7e6a5eb07a75a2c81801a157192a82bcbce0f21\",\n \"value\": \"3\"\n },\n \"info\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\",\n \"blockNumber\": \"10\",\n \"logIndex\": \"0\",\n \"signature\": \"Changed(address,uint256)\",\n \"subId\": \"sb-724b8416-786d-4e67-4cd3-5bae4a26eb0e\",\n \"timestamp\": \"1647365460\",\n \"transactionHash\": \"0xd5b5c716554097b2868d8705241bb2189bb76d16300f702ad05b0b02fccc4afb\",\n \"transactionIndex\": \"0x0\"\n },\n \"timestamp\": \"2022-03-15T17:31:00Z\",\n \"tx\": {\n \"type\": \"\"\n }\n },\n \"subscription\": {\n \"id\": \"f826269c-65ed-4634-b24c-4f399ec53a32\",\n \"namespace\": \"default\",\n \"name\": \"simple-storage\"\n }\n}\n You can see in the event received over the WebSocket connection, the blockchain event that was emitted from our first transaction, which happened in the past. We received this event, because when we set up both the Listener, and the Subscription, we specified the \"firstEvent\" as \"oldest\". This tells FireFly to look for this event from the beginning of the blockchain, and that your app is interested in FireFly events since the beginning of FireFly's event history.
In the event, we can also see the blockchainevent itself, which has an output object. These are the params in our FireFly Interface, and the actual output of the event. Here we can see the value is 3 which is what we set the integer to in our original transaction.
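The WebSocket flow above can be sketched in Python. This sketch assumes the third-party websockets package and a local stack; the helper is our own, but the field names match the event payload shown above:

```python
import asyncio
import json

def start_command(name, namespace="default", autoack=True):
    # The "start" payload that tells FireFly to begin delivering
    # events on a named subscription
    return {"type": "start", "name": name,
            "namespace": namespace, "autoack": autoack}

async def listen(url="ws://localhost:5000/ws"):
    import websockets  # third-party: pip install websockets
    async with websockets.connect(url) as ws:
        await ws.send(json.dumps(start_command("simple-storage")))
        async for raw in ws:
            event = json.loads(raw)
            if event.get("type") == "blockchain_event_received":
                # "output" carries the decoded event params, e.g. {"value": "3"}
                print(event["blockchainevent"]["output"])

# asyncio.run(listen())  # requires a running FireFly stack
```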
If you query by the ID of your subscription with the fetchstatus parameter, you can see its current offset.
GET http://localhost:5000/api/v1/namespaces/default/subscriptions/f826269c-65ed-4634-b24c-4f399ec53a32?fetchstatus
{\n \"id\": \"f826269c-65ed-4634-b24c-4f399ec53a32\",\n \"namespace\": \"default\",\n \"name\": \"simple-storage\",\n \"transport\": \"websockets\",\n \"filter\": {\n \"events\": \"blockchain_event_received\",\n \"message\": {},\n \"transaction\": {},\n \"blockchainevent\": {\n \"listener\": \"1bfa3b0f-3d90-403e-94a4-af978d8c5b14\"\n }\n },\n \"options\": {\n \"firstEvent\": \"-1\",\n \"withData\": false\n },\n \"status\": {\n \"offset\": 20\n },\n \"created\": \"2022-03-15T17:35:30.131698921Z\",\n \"updated\": null\n}\n You've reached the end of the main guide to working with custom smart contracts in FireFly. Hopefully this was helpful and gives you what you need to get up and running with your own contracts. There are several additional ways to invoke or query smart contracts detailed below, so feel free to keep reading if you're curious.
"},{"location":"tutorials/custom_contracts/ethereum/#appendix-i-work-with-a-custom-contract-without-creating-a-named-api","title":"Appendix I: Work with a custom contract without creating a named API","text":"FireFly aims to offer a developer-friendly and flexible approach to using custom smart contracts. The guide above has detailed the most robust and feature-rich way to use custom contracts with FireFly, but there are several alternative API usage patterns available as well.
It is possible to broadcast a contract interface and use a smart contract that implements that interface without also broadcasting a named API as above. There are several key differences (which may or may not be desirable) compared to the method outlined in the full guide above:
POST http://localhost:5000/api/v1/namespaces/default/contracts/interfaces/8bdd27a5-67c1-4960-8d1e-7aa31b9084d3/invoke/set
{\n \"location\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n },\n \"input\": {\n \"newValue\": 7\n }\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#response_9","title":"Response","text":"{\n \"id\": \"f310fa4a-73d8-4777-9f9d-dfa5012a052f\"\n}\n All of the same invoke, query, and subscribe endpoints are available on the contract interface itself.
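Invoking through the interface can be scripted the same way as with a named API; the difference is that the contract location must travel with every request. A stdlib-only sketch (the helper names are our own; the interface ID is the one from this tutorial):

```python
import json
import urllib.request

IFACE = ("http://localhost:5000/api/v1/namespaces/default"
         "/contracts/interfaces/8bdd27a5-67c1-4960-8d1e-7aa31b9084d3")

def build_interface_invoke(address, new_value):
    # Without a named API there is no stored location, so it must be
    # supplied inline alongside the input arguments
    return {"location": {"address": address},
            "input": {"newValue": new_value}}

def invoke_set(address, new_value):
    req = urllib.request.Request(
        f"{IFACE}/invoke/set",
        data=json.dumps(build_interface_invoke(address, new_value)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # an {"id": ...} operation ID, as above
```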
"},{"location":"tutorials/custom_contracts/ethereum/#appendix-ii-work-directly-with-contracts-with-inline-requests","title":"Appendix II: Work directly with contracts with inline requests","text":"The final way of working with custom smart contracts with FireFly is to just put everything FireFly needs all in one request, each time a contract is invoked or queried. This is the most lightweight, but least feature-rich way of using a custom contract.
To do this, we will need to put both the contract location, and a subset of the FireFly Interface that describes the method we want to invoke in the request body, in addition to the function input.
"},{"location":"tutorials/custom_contracts/ethereum/#request_10","title":"Request","text":"POST http://localhost:5000/api/v1/namespaces/default/contracts/invoke
{\n \"location\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n },\n \"method\": {\n \"name\": \"set\",\n \"params\": [\n {\n \"name\": \"x\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\"\n }\n }\n }\n ],\n \"returns\": []\n },\n \"input\": {\n \"x\": 42\n }\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#response_10","title":"Response","text":"{\n \"id\": \"386d3e23-e4bc-4a9b-bc1f-452f0a8c9ae5\"\n}\n"},{"location":"tutorials/custom_contracts/fabric/","title":"Work with Hyperledger Fabric chaincodes","text":"This guide describes the steps to deploy a chaincode to a Hyperledger Fabric blockchain and use FireFly to interact with it in order to submit transactions, query for states, and listen for events.
NOTE: This guide assumes that you are running a local FireFly stack with at least 2 members and a Fabric blockchain created by the FireFly CLI. If you need help getting that set up, please see the Getting Started guide to Start your environment.
"},{"location":"tutorials/custom_contracts/fabric/#example-smart-contract","title":"Example smart contract","text":"For this tutorial, we will be using a well known, but slightly modified smart contract called asset_transfer. It's based on the asset-transfer-basic chaincode in the fabric-samples project. Check out the code repository and use the source code provided below to replace part of the content of the file fabric-samples/asset-transfer-basic/chaincode-go/chaincode/smartcontract.go.
Find the following return statement in the function CreateAsset:
return ctx.GetStub().PutState(id, assetJSON)\n and replace it with the following, so that an event will be emitted when the transaction is committed to the channel ledger:
err = ctx.GetStub().PutState(id, assetJSON)\n if err != nil {\n return err\n }\n return ctx.GetStub().SetEvent(\"AssetCreated\", assetJSON)\n"},{"location":"tutorials/custom_contracts/fabric/#create-the-chaincode-package","title":"Create the chaincode package","text":"Use the peer command to create the chaincode package for deployment. You can download the peer binary from the releases page of the Fabric project or build it from source.
~ johndoe$ cd fabric-samples/asset-transfer-basic/chaincode-go\n chaincode-go johndoe$ touch core.yaml\n chaincode-go johndoe$ peer lifecycle chaincode package -p . --label asset_transfer ./asset_transfer.zip\n The peer command requires an empty core.yaml file to be present in the working directory to perform the packaging. That's what the touch core.yaml command did above.
The resulting asset_transfer.zip archive file will be used in the next step to deploy to the Fabric network used in FireFly.
Deployment of smart contracts is not currently within the scope of responsibility for FireFly. You can use your standard blockchain specific tools to deploy your contract to the blockchain you are using.
The FireFly CLI provides a convenient function to deploy a chaincode package to a local FireFly stack.
NOTE: The contract deployment function of the FireFly CLI is a convenience function to speed up local development, and not intended for production applications
~ johndoe$ ff help deploy fabric\nDeploy a packaged chaincode to the Fabric network used by a FireFly stack\n\nUsage:\n ff deploy fabric <stack_name> <chaincode_package> <channel> <chaincodeName> <version> [flags]\n Notice the various parameters used by the ff deploy fabric command. We'll tell the FireFly CLI to deploy using the following parameter values; if your stack setup is different, update the command accordingly:
stack name: dev; channel: firefly (this is the channel that is created by the FireFly CLI when bootstrapping the stack, replace if you use a different channel in your setup); chaincode name: asset_transfer (must match the value of the --label parameter when creating the chaincode package); version: 1.0
$ ff deploy fabric dev asset_transfer.zip firefly asset_transfer 1.0\ninstalling chaincode\nquerying installed chaincode\napproving chaincode\ncommitting chaincode\n{\n \"chaincode\": \"asset_transfer\",\n \"channel\": \"firefly\"\n}\n"},{"location":"tutorials/custom_contracts/fabric/#the-firefly-interface-format","title":"The FireFly Interface Format","text":"In order to teach FireFly how to interact with the chaincode, a FireFly Interface (FFI) document is needed. While Ethereum (or other EVM based blockchains) requires an Application Binary Interface (ABI) to govern the interaction between the client and the smart contract, which is specific to each smart contract interface design, Fabric defines a generic chaincode interface and leaves the encoding and decoding of the parameter values to the discretion of the chaincode developer.
As a result, the FFI document for a Fabric chaincode must be hand-crafted. The following FFI sample demonstrates the specification for the following common cases:
the CreateAsset input parameters, the GetAllAssets output, and the AssetCreated event properties:{\n \"namespace\": \"default\",\n \"name\": \"asset_transfer\",\n \"description\": \"Spec interface for the asset-transfer-basic golang chaincode\",\n \"version\": \"1.0\",\n \"methods\": [\n {\n \"name\": \"GetAllAssets\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [],\n \"returns\": [\n {\n \"name\": \"\",\n \"schema\": {\n \"type\": \"array\",\n \"details\": {\n \"type\": \"object\",\n \"properties\": {\n \"type\": \"string\"\n }\n }\n }\n }\n ]\n },\n {\n \"name\": \"CreateAsset\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"id\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"color\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"size\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"owner\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"value\",\n \"schema\": {\n \"type\": \"string\"\n }\n }\n ],\n \"returns\": []\n }\n ],\n \"events\": [\n {\n \"name\": \"AssetCreated\"\n }\n ]\n}\n"},{"location":"tutorials/custom_contracts/fabric/#input-parameters","title":"Input parameters","text":"For the params section of the CreateAsset function, it is critical that the sequence of the properties (id, color, size, owner, value) matches the order of the input parameters in the chaincode's function signature:
func CreateAsset(ctx contractapi.TransactionContextInterface, id string, color string, size int, owner string, appraisedValue int) error\n"},{"location":"tutorials/custom_contracts/fabric/#return-values","title":"Return values","text":"FireFly can automatically decode JSON payloads in the return values. That's why the returns section of the GetAllAssets function only needs to specify the type as array of objects, without having to specify the detailed structure of the JSON payload.
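The ordering rule for params can be made concrete: in effect, the named inputs collapse to a positional argument list in the order the FFI declares them, which is why that order must match the chaincode signature. A small illustrative sketch (our own helper, not FireFly code):

```python
def ordered_args(ffi_params, inputs):
    # Positional order comes from the FFI "params" sequence, which is
    # why it must match the chaincode function signature exactly
    return [str(inputs[p["name"]]) for p in ffi_params]

ffi_params = [{"name": n, "schema": {"type": "string"}}
              for n in ("id", "color", "size", "owner", "value")]

args = ordered_args(ffi_params, {"id": "asset-01", "color": "blue",
                                 "size": "30", "owner": "Harry",
                                 "value": "23400"})
# args is ["asset-01", "blue", "30", "Harry", "23400"] -- the order that
# CreateAsset(ctx, id, color, size, owner, appraisedValue) expects
```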
On the other hand, if certain properties of the returned value are to be hidden, then you can provide a detailed structure of the JSON object with the desired properties. This is demonstrated in the JSON structure for the event payload, see below, where the property AppraisedValue is omitted from the output.
For events, FireFly automatically decodes JSON payloads. If the event payload is not JSON, base64 encoded bytes will be returned instead. For the events section of the FFI, only the name property needs to be specified.
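The decoding rule for event payloads can be mirrored on the client side. A sketch (a hypothetical helper of our own) applying the same logic: try JSON first, otherwise treat the payload as base64-encoded bytes:

```python
import base64
import json

def decode_event_payload(payload):
    # Mirrors the documented behavior: JSON payloads are decoded as JSON;
    # anything else is delivered as base64-encoded bytes
    try:
        return json.loads(payload)
    except ValueError:
        return base64.b64decode(payload)

assert decode_event_payload('{"ID": "asset-01"}') == {"ID": "asset-01"}
assert decode_event_payload("aGVsbG8=") == b"hello"
```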
Now that we have a FireFly Interface representation of our chaincode, we want to broadcast that to the entire network. This broadcast will be pinned to the blockchain, so we can always refer to this specific name and version, and everyone in the network will know exactly which contract interface we are talking about.
We will use the FFI JSON constructed above and POST that to the /contracts/interfaces API endpoint.
POST http://localhost:5000/api/v1/namespaces/default/contracts/interfaces?publish=true
NOTE: Without passing the query parameter publish=true when the interface is created, it will initially be unpublished and not broadcasted to other members of the network (if configured in multi-party). To publish the interface, a subsequent API call would need to be made to /contracts/interfaces/{name}/{version}/publish
{\n \"namespace\": \"default\",\n \"name\": \"asset_transfer\",\n \"description\": \"Spec interface for the asset-transfer-basic golang chaincode\",\n \"version\": \"1.0\",\n \"methods\": [\n {\n \"name\": \"GetAllAssets\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [],\n \"returns\": [\n {\n \"name\": \"\",\n \"schema\": {\n \"type\": \"array\",\n \"details\": {\n \"type\": \"object\",\n \"properties\": {\n \"type\": \"string\"\n }\n }\n }\n }\n ]\n },\n {\n \"name\": \"CreateAsset\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"id\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"color\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"size\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"owner\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"value\",\n \"schema\": {\n \"type\": \"string\"\n }\n }\n ],\n \"returns\": []\n }\n ],\n \"events\": [\n {\n \"name\": \"AssetCreated\"\n }\n ]\n}\n"},{"location":"tutorials/custom_contracts/fabric/#response","title":"Response","text":"{\n \"id\": \"f1e5522c-59a5-4787-bbfd-89975e5b0954\",\n \"message\": \"8a01fc83-5729-418b-9706-6fc17c8d2aac\",\n \"namespace\": \"default\",\n \"name\": \"asset_transfer\",\n \"description\": \"Spec interface for the asset-transfer-basic golang chaincode\",\n \"version\": \"1.1\",\n \"methods\": [\n {\n \"id\": \"b31e3623-35e8-4918-bf8c-1b0d6c01de25\",\n \"interface\": \"f1e5522c-59a5-4787-bbfd-89975e5b0954\",\n \"name\": \"GetAllAssets\",\n \"namespace\": \"default\",\n \"pathname\": \"GetAllAssets\",\n \"description\": \"\",\n \"params\": [],\n \"returns\": [\n {\n \"name\": \"\",\n \"schema\": {\n \"type\": \"array\",\n \"details\": {\n \"type\": \"object\",\n \"properties\": {\n \"type\": \"string\"\n }\n }\n }\n }\n ]\n },\n {\n \"id\": \"e5a170d1-0be1-4697-800b-f4bcfaf71cf6\",\n \"interface\": \"f1e5522c-59a5-4787-bbfd-89975e5b0954\",\n \"name\": 
\"CreateAsset\",\n \"namespace\": \"default\",\n \"pathname\": \"CreateAsset\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"id\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"color\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"size\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"owner\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"value\",\n \"schema\": {\n \"type\": \"string\"\n }\n }\n ],\n \"returns\": []\n }\n ],\n \"events\": [\n {\n \"id\": \"27564533-30bd-4536-884e-02e5d79ec238\",\n \"interface\": \"f1e5522c-59a5-4787-bbfd-89975e5b0954\",\n \"namespace\": \"default\",\n \"pathname\": \"AssetCreated\",\n \"signature\": \"\",\n \"name\": \"AssetCreated\",\n \"description\": \"\",\n \"params\": null\n }\n ]\n}\n NOTE: We can broadcast this contract interface conveniently with the help of FireFly Sandbox running at http://127.0.0.1:5108
In the Sandbox, go to the Contracts Section, click Define a Contract Interface, select FFI - FireFly Interface in the Interface Format dropdown, paste the FFI JSON crafted by you into the Schema Field, and click Run. Now comes the fun part where we see some of the powerful, developer-friendly features of FireFly. The next thing we're going to do is tell FireFly to build an HTTP API for this chaincode, complete with an OpenAPI Specification and Swagger UI. As part of this, we'll also tell FireFly where the chaincode is on the blockchain.
Like the interface broadcast above, this will also generate a broadcast which will be pinned to the blockchain so all the members of the network will be aware of and able to interact with this API.
We need to copy the id field we got in the response from the previous step to the interface.id field in the request body below. We will also pick a name that will be part of the URL for our HTTP API, so be sure to pick a name that is URL friendly. In this case we'll call it asset_transfer. Lastly, in the location field, we're telling FireFly where an instance of the chaincode is deployed on-chain, which is a chaincode named asset_transfer in the channel firefly.
NOTE: The location field is optional here, but if it is omitted, it will be required in every request to invoke or query the chaincode. This can be useful if you have multiple instances of the same chaincode deployed to different channels.
POST http://localhost:5000/api/v1/namespaces/default/apis?publish=true
NOTE: Without passing the query parameter publish=true when the API is created, it will initially be unpublished and not broadcasted to other members of the network (if configured in multi-party). To publish the API, a subsequent API call would need to be made to /apis/{apiName}/publish
{\n \"name\": \"asset_transfer\",\n \"interface\": {\n \"id\": \"f1e5522c-59a5-4787-bbfd-89975e5b0954\"\n },\n \"location\": {\n \"channel\": \"firefly\",\n \"chaincode\": \"asset_transfer\"\n }\n}\n"},{"location":"tutorials/custom_contracts/fabric/#response_1","title":"Response","text":"{\n \"id\": \"a9a9ab4e-2544-45d5-8824-3c05074fbf75\",\n \"namespace\": \"default\",\n \"interface\": {\n \"id\": \"f1e5522c-59a5-4787-bbfd-89975e5b0954\"\n },\n \"location\": {\n \"channel\": \"firefly\",\n \"chaincode\": \"asset_transfer\"\n },\n \"name\": \"asset_transfer\",\n \"message\": \"5f1556a1-5cb1-4bc6-8611-d8f88ccf9c30\",\n \"urls\": {\n \"openapi\": \"http://127.0.0.1:5000/api/v1/namespaces/default/apis/asset_transfer/api/swagger.json\",\n \"ui\": \"http://127.0.0.1:5000/api/v1/namespaces/default/apis/asset_transfer/api\"\n }\n}\n NOTE: We can create this Http API conveniently with the help of FireFly Sandbox running at http://127.0.0.1:5108
In the Sandbox, go to the Contracts Section, click Register a Contract API, select your interface in the Contract Interface dropdown, give a name in the Name Field that will be part of the URL for your HTTP API, give your chaincode name (for which you wrote the FFI) in the Chaincode Field, give the channel name where your chaincode is deployed in the Channel Field, and click Run. You'll notice in the response body that there are a couple of URLs near the bottom. If you navigate to the one labeled ui in your browser, you should see the Swagger UI for your chaincode.
The /invoke endpoints in the generated API are for submitting transactions. These endpoints will be mapped to the POST /transactions endpoint of the FabConnect API.
The /query endpoints in the generated API, on the other hand, are for sending query requests. These endpoints will be mapped to the POST /query endpoint of the FabConnect API, which under the covers only sends chaincode endorsement requests to the target peer node without sending a transaction payload to the orderer node.
Now that we've got everything set up, it's time to use our chaincode! We're going to make a POST request to the invoke/CreateAsset endpoint to create a new asset.
POST http://localhost:5000/api/v1/namespaces/default/apis/asset_transfer/invoke/CreateAsset
{\n \"input\": {\n \"color\": \"blue\",\n \"id\": \"asset-01\",\n \"owner\": \"Harry\",\n \"size\": \"30\",\n \"value\": \"23400\"\n }\n}\n"},{"location":"tutorials/custom_contracts/fabric/#response_2","title":"Response","text":"{\n \"id\": \"b8e905cc-bc23-434a-af7d-13c6d85ae545\",\n \"namespace\": \"default\",\n \"tx\": \"79d2668e-4626-4634-9448-1b40fa0d9dfd\",\n \"type\": \"blockchain_invoke\",\n \"status\": \"Pending\",\n \"plugin\": \"fabric\",\n \"input\": {\n \"input\": {\n \"color\": \"blue\",\n \"id\": \"asset-02\",\n \"owner\": \"Harry\",\n \"size\": \"30\",\n \"value\": \"23400\"\n },\n \"interface\": \"f1e5522c-59a5-4787-bbfd-89975e5b0954\",\n \"key\": \"Org1MSP::x509::CN=org_0,OU=client::CN=fabric_ca.org1.example.com,OU=Hyperledger FireFly,O=org1.example.com,L=Raleigh,ST=North Carolina,C=US\",\n \"location\": {\n \"chaincode\": \"asset_transfer\",\n \"channel\": \"firefly\"\n },\n \"method\": {\n \"description\": \"\",\n \"id\": \"e5a170d1-0be1-4697-800b-f4bcfaf71cf6\",\n \"interface\": \"f1e5522c-59a5-4787-bbfd-89975e5b0954\",\n \"name\": \"CreateAsset\",\n \"namespace\": \"default\",\n \"params\": [\n {\n \"name\": \"id\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"color\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"size\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"owner\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"value\",\n \"schema\": {\n \"type\": \"string\"\n }\n }\n ],\n \"pathname\": \"CreateAsset\",\n \"returns\": []\n },\n \"methodPath\": \"CreateAsset\",\n \"type\": \"invoke\"\n },\n \"created\": \"2022-05-02T17:08:40.811630044Z\",\n \"updated\": \"2022-05-02T17:08:40.811630044Z\"\n}\n You'll notice that we got an ID back with status Pending, and that's expected due to the asynchronous programming model of working with custom onchain logic in FireFly. To see what the latest state is now, we can query the chaincode. 
In a little bit, we'll also subscribe to the events emitted by this chaincode so we can know when the state is updated in realtime.
To make a read-only request to the blockchain to check the current list of assets, we can make a POST to the query/GetAllAssets endpoint.
POST http://localhost:5000/api/v1/namespaces/default/apis/asset_transfer/query/GetAllAssets
{}\n"},{"location":"tutorials/custom_contracts/fabric/#response_3","title":"Response","text":"[\n {\n \"AppraisedValue\": 23400,\n \"Color\": \"blue\",\n \"ID\": \"asset-01\",\n \"Owner\": \"Harry\",\n \"Size\": 30\n }\n]\n NOTE: Some chaincodes may have queries that require input parameters. That's why the query endpoint is a POST, rather than a GET, so that parameters can be passed as JSON in the request body. This particular function does not have any parameters, so we just pass an empty JSON object.
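As with the Ethereum example, these calls are easy to script. A stdlib-only Python sketch against the generated asset_transfer API (the helper names are our own; a running local stack is assumed):

```python
import json
import urllib.request

BASE = "http://localhost:5000/api/v1/namespaces/default/apis/asset_transfer"

def build_create_body(asset_id, color, size, owner, value):
    # All values are strings, matching the hand-crafted FFI above
    return {"input": {"id": asset_id, "color": color, "size": size,
                      "owner": owner, "value": value}}

def post(path, body):
    req = urllib.request.Request(
        f"{BASE}/{path}",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# post("invoke/CreateAsset", build_create_body("asset-03", "red", "12", "Alice", "900"))
# post("query/GetAllAssets", {})  # returns the current asset list as JSON
```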
Now that we've seen how to submit transactions and perform read-only queries to the blockchain, let's look at how to receive blockchain events so we know when things are happening in realtime.
If you look at the source code for the smart contract we're working with above, you'll notice that it emits an event when a new asset is created. In order to receive these events, we first need to instruct FireFly to listen for this specific type of blockchain event. To do this, we create an Event Listener.
The /contracts/listeners endpoint is RESTful so there are POST, GET, and DELETE methods available on it. To create a new listener, we will make a POST request. We are going to tell FireFly to listen to events with name \"AssetCreated\" from the FireFly Interface we defined earlier, referenced by its ID. We will also tell FireFly which channel and chaincode we expect to emit these events.
POST http://localhost:5000/api/v1/namespaces/default/contracts/listeners
{\n \"filters\": [\n {\n \"interface\": {\n \"id\": \"f1e5522c-59a5-4787-bbfd-89975e5b0954\"\n },\n \"location\": {\n \"channel\": \"firefly\",\n \"chaincode\": \"asset_transfer\"\n },\n \"event\": {\n \"name\": \"AssetCreated\"\n }\n }\n ],\n \"options\": {\n \"firstEvent\": \"oldest\"\n },\n \"topic\": \"assets\"\n}\n"},{"location":"tutorials/custom_contracts/fabric/#response_4","title":"Response","text":"{\n \"id\": \"d6b5e774-c9e5-474c-9495-ec07fa47a907\",\n \"namespace\": \"default\",\n \"name\": \"sb-44aa348a-bafb-4243-594e-dcad689f1032\",\n \"backendId\": \"sb-44aa348a-bafb-4243-594e-dcad689f1032\",\n \"location\": {\n \"channel\": \"firefly\",\n \"chaincode\": \"asset_transfer\"\n },\n \"created\": \"2024-07-22T15:36:58.514085959Z\",\n \"event\": {\n \"name\": \"AssetCreated\",\n \"description\": \"\",\n \"params\": null\n },\n \"signature\": \"firefly-asset_transfer:AssetCreated\",\n \"topic\": \"assets\",\n \"options\": {\n \"firstEvent\": \"oldest\"\n },\n \"filters\": [\n {\n \"event\": {\n \"name\": \"AssetCreated\",\n \"description\": \"\",\n \"params\": null\n },\n \"location\": {\n \"channel\": \"firefly\",\n \"chaincode\": \"asset_transfer\"\n },\n \"signature\": \"firefly-asset_transfer:AssetCreated\"\n }\n ]\n}\n"},{"location":"tutorials/custom_contracts/fabric/#subscribe-to-events-from-our-contract","title":"Subscribe to events from our contract","text":"Now that we've told FireFly that it should listen for specific events on the blockchain, we can set up a Subscription for FireFly to send events to our client app. To set up our subscription, we will make a POST to the /subscriptions endpoint.
We will set a friendly name asset_transfer to identify the Subscription when we are connecting to it in the next step.
We're also going to set up a filter to send only blockchain events from the listener that we created in the previous step. To do that, we'll copy the listener ID from the step above (6e7f5dd8-5a57-4163-a1d2-5654e784dc31) and set that as the value of the listener field in the example below:
POST http://localhost:5000/api/v1/namespaces/default/subscriptions
{\n \"namespace\": \"default\",\n \"name\": \"asset_transfer\",\n \"transport\": \"websockets\",\n \"filter\": {\n \"events\": \"blockchain_event_received\",\n \"blockchainevent\": {\n \"listener\": \"6e7f5dd8-5a57-4163-a1d2-5654e784dc31\"\n }\n },\n \"options\": {\n \"firstEvent\": \"oldest\"\n }\n}\n"},{"location":"tutorials/custom_contracts/fabric/#response_5","title":"Response","text":"{\n \"id\": \"06d18b49-e763-4f5c-9e97-c25024fe57c8\",\n \"namespace\": \"default\",\n \"name\": \"asset_transfer\",\n \"transport\": \"websockets\",\n \"filter\": {\n \"events\": \"blockchain_event_received\",\n \"message\": {},\n \"transaction\": {},\n \"blockchainevent\": {\n \"listener\": \"6e7f5dd8-5a57-4163-a1d2-5654e784dc31\"\n }\n },\n \"options\": {\n \"firstEvent\": \"-1\",\n \"withData\": false\n },\n \"created\": \"2022-05-02T17:22:06.480181291Z\",\n \"updated\": null\n}\n"},{"location":"tutorials/custom_contracts/fabric/#receive-custom-smart-contract-events","title":"Receive custom smart contract events","text":"The last step is to connect a WebSocket client to FireFly to receive the event. You can use any WebSocket client you like, such as Postman or a command line app like websocat.
Connect your WebSocket client to ws://localhost:5000/ws.
After connecting the WebSocket client, send a message to tell FireFly to:
start receiving events on the asset_transfer subscription in the default namespace, automatically acknowledging each event: {\n \"type\": \"start\",\n \"name\": \"asset_transfer\",\n \"namespace\": \"default\",\n \"autoack\": true\n}\n"},{"location":"tutorials/custom_contracts/fabric/#websocket-event","title":"WebSocket event","text":"After creating the subscription, you should see an event arrive on the connected WebSocket client that looks something like this:
{\n \"id\": \"d9fb86b2-b25b-43b8-80d3-936c5daa5a66\",\n \"sequence\": 29,\n \"type\": \"blockchain_event_received\",\n \"namespace\": \"default\",\n \"reference\": \"e0d670b4-a1b6-4efd-a985-06dfaaa58fe3\",\n \"topic\": \"assets\",\n \"created\": \"2022-05-02T17:26:57.57612001Z\",\n \"blockchainEvent\": {\n \"id\": \"e0d670b4-a1b6-4efd-a985-06dfaaa58fe3\",\n \"source\": \"fabric\",\n \"namespace\": \"default\",\n \"name\": \"AssetCreated\",\n \"listener\": \"6e7f5dd8-5a57-4163-a1d2-5654e784dc31\",\n \"protocolId\": \"000000000015/000000/000000\",\n \"output\": {\n \"AppraisedValue\": 12300,\n \"Color\": \"red\",\n \"ID\": \"asset-01\",\n \"Owner\": \"Jerry\",\n \"Size\": 10\n },\n \"info\": {\n \"blockNumber\": 15,\n \"chaincodeId\": \"asset_transfer\",\n \"eventIndex\": 0,\n \"eventName\": \"AssetCreated\",\n \"subId\": \"sb-2cac2bfa-38af-4408-4ff3-973421410e5d\",\n \"timestamp\": 1651512414920972300,\n \"transactionId\": \"172637bf59a3520ca6dd02f716e1043ba080e10e1cd2f98b4e6b85abcc6a6d69\",\n \"transactionIndex\": 0\n },\n \"timestamp\": \"2022-05-02T17:26:54.9209723Z\",\n \"tx\": {\n \"type\": \"\",\n \"blockchainId\": \"172637bf59a3520ca6dd02f716e1043ba080e10e1cd2f98b4e6b85abcc6a6d69\"\n }\n },\n \"subscription\": {\n \"id\": \"06d18b49-e763-4f5c-9e97-c25024fe57c8\",\n \"namespace\": \"default\",\n \"name\": \"asset_transfer\"\n }\n}\n You can see in the event received over the WebSocket connection, the blockchain event that was emitted from our first transaction, which happened in the past. We received this event, because when we set up both the Listener, and the Subscription, we specified the \"firstEvent\" as \"oldest\". This tells FireFly to look for this event from the beginning of the blockchain, and that your app is interested in FireFly events since the beginning of FireFly's event history.
In the event, we can also see the blockchainEvent itself, which has an output object. This contains the event payload that was set by the chaincode.
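The WebSocket flow above can be sketched in Python. The `start_message` and `extract_asset` helpers are our own illustrative names; the start-command fields and event shape match the examples in this section, and the commented connection loop assumes the third-party `websockets` package (any WebSocket client works):

```python
import json

def start_message(name, namespace="default", autoack=True):
    """Build the 'start' command that tells FireFly to begin delivering
    events for a named subscription over the WebSocket."""
    return json.dumps({
        "type": "start",
        "name": name,
        "namespace": namespace,
        "autoack": autoack,
    })

def extract_asset(event):
    """Pull the chaincode event payload out of a FireFly event envelope."""
    if event.get("type") != "blockchain_event_received":
        return None
    return event["blockchainEvent"]["output"]

# Connection loop sketch (assumes `pip install websockets`):
#
#   import asyncio, websockets
#   async def listen():
#       async with websockets.connect("ws://localhost:5000/ws") as ws:
#           await ws.send(start_message("asset_transfer"))
#           async for raw in ws:
#               asset = extract_asset(json.loads(raw))
#               if asset:
#                   print("asset created:", asset)
#   asyncio.run(listen())
```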
This guide describes how to associate an arbitrary off-chain payload with a blockchain transaction on a contract of your own design. A hash of the payload will be recorded as part of the blockchain transaction, and on the receiving side, FireFly will ensure that both the on-chain and off-chain pieces are received and aggregated together.
NOTE: This is an advanced FireFly feature. Before following any of the steps in this guide, you should be very familiar and comfortable with the basic features of how broadcast messages and private messages work, be proficient at custom contract development on your blockchain of choice, and understand the fundamentals of how FireFly interacts with custom contracts.
"},{"location":"tutorials/custom_contracts/pinning/#designing-a-compatible-contract","title":"Designing a compatible contract","text":"In order to allow pinning a FireFly message batch with a custom contract transaction, your contract must meet certain criteria.
First, any external method of the contract that will be used for associating with off-chain payloads must provide an extra parameter for passing the encoded batch data. This must be the last parameter in the method signature. This convention is chosen partly to align with the Ethereum ERC5750 standard, but should serve as a straightforward guideline for nearly any blockchain.
Second, this method must emit a BatchPin event that can be received and parsed by FireFly. Exactly how the data is unpacked and used to emit this event will differ for each blockchain.
import \"@hyperledger/firefly-contracts/contracts/IBatchPin.sol\";\n\ncontract CustomPin {\n IBatchPin firefly;\n\n function setFireFlyAddress(address addr) external {\n firefly = IBatchPin(addr);\n }\n\n function sayHello(bytes calldata data) external {\n require(\n address(firefly) != address(0),\n \"CustomPin: FireFly address has not been set\"\n );\n\n /* do custom things */\n\n firefly.pinBatchData(data);\n }\n}\n The method accepts a final parameter of encoded batch data (of type bytes). The method must invoke the pinBatchData method of the FireFly Multiparty Contract and pass along this data payload. It is generally good practice to trigger this as a final step before returning, after the method has performed its own logic. The current address of the FireFly Multiparty Contract can be queried from the /status API (under multiparty.contract.location as of FireFly v1.1.0). However, the application must also consider how to appropriately secure this functionality, and how to update this location if a multiparty \"network action\" is used to migrate the network onto a new FireFly multiparty contract.package chaincode\n\nimport (\n \"encoding/json\"\n \"fmt\"\n\n \"github.com/hyperledger/fabric-contract-api-go/contractapi\"\n \"github.com/hyperledger/firefly/custompin_sample/batchpin\"\n)\n\ntype SmartContract struct {\n contractapi.Contract\n}\n\nfunc (s *SmartContract) MyCustomPin(ctx contractapi.TransactionContextInterface, data string) error {\n event, err := batchpin.BuildEventFromString(ctx, data)\n if err != nil {\n return err\n }\n bytes, err := json.Marshal(event)\n if err != nil {\n return fmt.Errorf(\"failed to marshal event: %s\", err)\n }\n return ctx.GetStub().SetEvent(\"BatchPin\", bytes)\n}\n The Fabric method accepts a final parameter of encoded batch data (of type string). The method must unpack this argument into a JSON object and emit a BatchPin event in the same format that is used by the FireFly Multiparty Contract. Once you have a contract designed, you can initialize your environment using the blockchain of your choice.
No special initialization arguments are needed for Ethereum.
If you are using Fabric, you must pass the --custom-pin-support argument when initializing your FireFly stack. This will ensure that the BatchPin event listener listens to events from all chaincode deployed on the default channel, instead of only listening to events from the pre-deployed FireFly chaincode.
You can follow the normal steps for Ethereum or Fabric to define your contract interface and API in FireFly. When invoking the contract, you can include a message payload alongside the other parameters.
POST http://localhost:5000/api/v1/namespaces/default/apis/custom-pin/invoke/sayHello
{\n \"input\": {},\n \"message\": {\n \"data\": [\n {\n \"value\": \"payload here\"\n }\n ]\n }\n}\n"},{"location":"tutorials/custom_contracts/pinning/#listening-for-events","title":"Listening for events","text":"All parties that receive the message will receive a message_confirmed on their event listeners. This event confirms that the off-chain payload has been received (via data exchange or shared storage) and that the blockchain transaction has been received and sequenced. It is guaranteed that these message_confirmed events will be ordered based on the sequence of the on-chain transactions, regardless of when the off-chain payload becomes available. This means that all parties will order messages on a given topic in exactly the same order, allowing for deterministic but decentralized event-driven architecture.
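The request body for such a pinned invoke can be assembled programmatically. A minimal sketch (the helper name is ours; the input/message shape matches the example above):

```python
import json

def pinned_invoke_body(inputs, payload_values):
    """Build the body for invoking a custom method while pinning an
    off-chain message: FireFly encodes the batch, passes it as the extra
    last parameter of the contract method, and delivers the payload
    off-chain to the other parties."""
    return {
        "input": inputs,
        "message": {"data": [{"value": v} for v in payload_values]},
    }

body = pinned_invoke_body({}, ["payload here"])
print(json.dumps(body, indent=2))
```

POSTing this body to the method's invoke endpoint (as in the sayHello example) triggers both the on-chain transaction and the off-chain delivery.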
This guide describes the steps to deploy a smart contract to a Tezos blockchain and use FireFly to interact with it: submitting transactions, querying state, and listening for events.
"},{"location":"tutorials/custom_contracts/tezos/#smart-contract-languages","title":"Smart Contract Languages","text":"Smart contracts on Tezos can be programmed using familiar, developer-friendly languages. All features available on Tezos can be written in any of the high-level languages used to write smart contracts, such as Archetype, LIGO, and SmartPy. These languages all compile down to Michelson and you can switch between languages based on your preferences and projects.
NOTE: For this tutorial we are going to use SmartPy for building Tezos smart contracts utilizing the broadly adopted Python language.
"},{"location":"tutorials/custom_contracts/tezos/#example-smart-contract","title":"Example smart contract","text":"First let's look at a simple smart contract called SimpleStorage, which we will be using on a Tezos blockchain. Here we have one state variable called 'storedValue', initialized with the value 12; during initialization the type of the variable was defined as 'int'. You can see more at SmartPy types. We then added a simple test, which sets the storage value to 15 and checks that the value was changed as expected.
NOTE: Smart contract tests (marked with the @sp.add_test annotation) are used to verify the validity of contract entrypoints and do not affect the state of the contract during deployment.
Here is the source for this contract:
import smartpy as sp\n\n@sp.module\ndef main():\n # Declares a new contract\n class SimpleStorage(sp.Contract):\n # Storage. Persists in between transactions\n def __init__(self, value):\n self.data.x = value\n\n # Allows the stored integer to be changed\n @sp.entrypoint\n def set(self, params):\n self.data.x = params.value\n\n # Returns the currently stored integer\n @sp.onchain_view()\n def get(self):\n return self.data.x\n\n@sp.add_test()\ndef test():\n # Create a test scenario\n scenario = sp.test_scenario(\"Test simple storage\", main)\n scenario.h1(\"SimpleStorage\")\n\n # Initialize the contract\n c = main.SimpleStorage(12)\n\n # Run some test cases\n scenario += c\n c.set(value=15)\n scenario.verify(c.data.x == 15)\n scenario.verify(scenario.compute(c.get()) == 15)\n"},{"location":"tutorials/custom_contracts/tezos/#contract-deployment-via-smartpy-ide","title":"Contract deployment via SmartPy IDE","text":"To deploy the contract, we will use SmartPy IDE.
Here we can see that our new contract address is KT1ED4gj2xZnp8318yxa5NpvyvW15pqe4yFg. This is the address that we will reference in the rest of this guide.
To deploy the contract, we can use the HTTP API: POST http://localhost:5000/api/v1/namespaces/default/contracts/deploy
{\n \"contract\": {\n \"code\": [\n {\n \"prim\": \"storage\",\n \"args\": [\n {\n \"prim\": \"int\"\n }\n ]\n },\n {\n \"prim\": \"parameter\",\n \"args\": [\n {\n \"prim\": \"int\",\n \"annots\": [\"%set\"]\n }\n ]\n },\n {\n \"prim\": \"code\",\n \"args\": [\n [\n {\n \"prim\": \"CAR\"\n },\n {\n \"prim\": \"NIL\",\n \"args\": [\n {\n \"prim\": \"operation\"\n }\n ]\n },\n {\n \"prim\": \"PAIR\"\n }\n ]\n ]\n },\n {\n \"prim\": \"view\",\n \"args\": [\n {\n \"string\": \"get\"\n },\n {\n \"prim\": \"unit\"\n },\n {\n \"prim\": \"int\"\n },\n [\n {\n \"prim\": \"CDR\"\n }\n ]\n ]\n }\n ],\n \"storage\": {\n \"int\": \"12\"\n }\n }\n}\n The contract field has two sub-fields: code, containing the Michelson code of the contract, and storage, containing the initial storage values.
The response of the request above:
{\n \"id\": \"0c3810c7-baed-4077-9d2c-af316a4a567f\",\n \"namespace\": \"default\",\n \"tx\": \"21d03e6d-d106-48f4-aacd-688bf17b71fd\",\n \"type\": \"blockchain_deploy\",\n \"status\": \"Pending\",\n \"plugin\": \"tezos\",\n \"input\": {\n \"contract\": {\n \"code\": [\n {\n \"args\": [\n {\n \"prim\": \"int\"\n }\n ],\n \"prim\": \"storage\"\n },\n {\n \"args\": [\n {\n \"annots\": [\"%set\"],\n \"prim\": \"int\"\n }\n ],\n \"prim\": \"parameter\"\n },\n {\n \"args\": [\n [\n {\n \"prim\": \"CAR\"\n },\n {\n \"args\": [\n {\n \"prim\": \"operation\"\n }\n ],\n \"prim\": \"NIL\"\n },\n {\n \"prim\": \"PAIR\"\n }\n ]\n ],\n \"prim\": \"code\"\n },\n {\n \"args\": [\n {\n \"string\": \"get\"\n },\n {\n \"prim\": \"unit\"\n },\n {\n \"prim\": \"int\"\n },\n [\n {\n \"prim\": \"CDR\"\n }\n ]\n ],\n \"prim\": \"view\"\n }\n ],\n \"storage\": {\n \"int\": \"12\"\n }\n },\n \"definition\": null,\n \"input\": null,\n \"key\": \"tz1V3spuktTP2wuEZP7D2hJruLZ5uJTuJk31\",\n \"options\": null\n },\n \"created\": \"2024-04-01T14:20:20.665039Z\",\n \"updated\": \"2024-04-01T14:20:20.665039Z\"\n}\n The success result of deploy can be checked by GET http://localhost:5000/api/v1/namespaces/default/operations/0c3810c7-baed-4077-9d2c-af316a4a567f where 0c3810c7-baed-4077-9d2c-af316a4a567f is operation id from response above.
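That status check is easy to script. A sketch that polls the operations API until the deployment settles, using only the Python standard library (helper names are ours; the URL and operation ID follow this guide):

```python
import json
import time
from urllib import request

OP_URL = "http://localhost:5000/api/v1/namespaces/default/operations/{id}"

def is_terminal(op):
    """A blockchain_deploy operation settles as Succeeded or Failed."""
    return op.get("status") in ("Succeeded", "Failed")

def wait_for_operation(op_id, timeout=60, interval=2):
    """Poll until the operation leaves Pending, or raise on timeout.
    Requires a running FireFly stack."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with request.urlopen(OP_URL.format(id=op_id)) as resp:
            op = json.load(resp)
        if is_terminal(op):
            return op
        time.sleep(interval)
    raise TimeoutError(f"operation {op_id} still pending after {timeout}s")

# wait_for_operation("0c3810c7-baed-4077-9d2c-af316a4a567f")
```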
The success response:
{\n \"id\": \"0c3810c7-baed-4077-9d2c-af316a4a567f\",\n \"namespace\": \"default\",\n \"tx\": \"21d03e6d-d106-48f4-aacd-688bf17b71fd\",\n \"type\": \"blockchain_deploy\",\n \"status\": \"Succeeded\",\n \"plugin\": \"tezos\",\n \"input\": {\n \"contract\": {\n \"code\": [\n {\n \"args\": [\n {\n \"prim\": \"int\"\n }\n ],\n \"prim\": \"storage\"\n },\n {\n \"args\": [\n {\n \"annots\": [\"%set\"],\n \"prim\": \"int\"\n }\n ],\n \"prim\": \"parameter\"\n },\n {\n \"args\": [\n [\n {\n \"prim\": \"CAR\"\n },\n {\n \"args\": [\n {\n \"prim\": \"operation\"\n }\n ],\n \"prim\": \"NIL\"\n },\n {\n \"prim\": \"PAIR\"\n }\n ]\n ],\n \"prim\": \"code\"\n },\n {\n \"args\": [\n {\n \"string\": \"get\"\n },\n {\n \"prim\": \"unit\"\n },\n {\n \"prim\": \"int\"\n },\n [\n {\n \"prim\": \"CDR\"\n }\n ]\n ],\n \"prim\": \"view\"\n }\n ],\n \"storage\": {\n \"int\": \"12\"\n }\n },\n \"definition\": null,\n \"input\": null,\n \"key\": \"tz1V3spuktTP2wuEZP7D2hJruLZ5uJTuJk31\",\n \"options\": null\n },\n \"output\": {\n \"headers\": {\n \"requestId\": \"default:0c3810c7-baed-4077-9d2c-af316a4a567f\",\n \"type\": \"TransactionSuccess\"\n },\n \"protocolId\": \"ProxfordYmVfjWnRcgjWH36fW6PArwqykTFzotUxRs6gmTcZDuH\",\n \"transactionHash\": \"ootDut4xxR2yeYz6JuySuyTVZnXgda2t8SYrk3iuJpm531TZuCj\"\n },\n \"created\": \"2024-04-01T14:20:20.665039Z\",\n \"updated\": \"2024-04-01T14:20:20.665039Z\",\n \"detail\": {\n \"created\": \"2024-04-01T14:20:21.928976Z\",\n \"firstSubmit\": \"2024-04-01T14:20:22.714493Z\",\n \"from\": \"tz1V3spuktTP2wuEZP7D2hJruLZ5uJTuJk31\",\n \"gasPrice\": \"0\",\n \"historySummary\": [\n {\n \"count\": 1,\n \"firstOccurrence\": \"2024-04-01T14:20:21.930764Z\",\n \"lastOccurrence\": \"2024-04-01T14:20:21.930765Z\",\n \"subStatus\": \"Received\"\n },\n {\n \"action\": \"AssignNonce\",\n \"count\": 2,\n \"firstOccurrence\": \"2024-04-01T14:20:21.930767Z\",\n \"lastOccurrence\": \"2024-04-01T14:20:22.714772Z\"\n },\n {\n \"action\": \"RetrieveGasPrice\",\n \"count\": 
1,\n \"firstOccurrence\": \"2024-04-01T14:20:22.714774Z\",\n \"lastOccurrence\": \"2024-04-01T14:20:22.714774Z\"\n },\n {\n \"action\": \"SubmitTransaction\",\n \"count\": 1,\n \"firstOccurrence\": \"2024-04-01T14:20:22.715269Z\",\n \"lastOccurrence\": \"2024-04-01T14:20:22.715269Z\"\n },\n {\n \"action\": \"ReceiveReceipt\",\n \"count\": 1,\n \"firstOccurrence\": \"2024-04-01T14:20:29.244396Z\",\n \"lastOccurrence\": \"2024-04-01T14:20:29.244396Z\"\n },\n {\n \"action\": \"Confirm\",\n \"count\": 1,\n \"firstOccurrence\": \"2024-04-01T14:20:29.244762Z\",\n \"lastOccurrence\": \"2024-04-01T14:20:29.244762Z\"\n }\n ],\n \"id\": \"default:0c3810c7-baed-4077-9d2c-af316a4a567f\",\n \"lastSubmit\": \"2024-04-01T14:20:22.714493Z\",\n \"nonce\": \"23094946\",\n \"policyInfo\": {},\n \"receipt\": {\n \"blockHash\": \"BLvWL4t8GbaufGcQwiv3hHCsvgD6qwXfAXofyvojSMoFeGMXMR1\",\n \"blockNumber\": \"5868268\",\n \"contractLocation\": {\n \"address\": \"KT1CkTPsgTUQxR3CCpvtrcuQFV5Jf7cJgHFg\"\n },\n \"extraInfo\": [\n {\n \"consumedGas\": \"584\",\n \"contractAddress\": \"KT1CkTPsgTUQxR3CCpvtrcuQFV5Jf7cJgHFg\",\n \"counter\": null,\n \"errorMessage\": null,\n \"fee\": null,\n \"from\": null,\n \"gasLimit\": null,\n \"paidStorageSizeDiff\": \"75\",\n \"status\": \"applied\",\n \"storage\": null,\n \"storageLimit\": null,\n \"storageSize\": \"75\",\n \"to\": null\n }\n ],\n \"protocolId\": \"ProxfordYmVfjWnRcgjWH36fW6PArwqykTFzotUxRs6gmTcZDuH\",\n \"success\": true,\n \"transactionIndex\": \"0\"\n },\n \"sequenceId\": \"018e9a08-582a-01ec-9209-9d79ef742c9b\",\n \"status\": \"Succeeded\",\n \"transactionData\": \"c37274b662d68da8fdae2a02ad6c460a79933c70c6fa7500dc98a9ade6822f026d00673bb6e6298063f97940953de23d441ab20bf757f602a3cd810bad05b003000000000041020000003c0500045b00000004257365740501035b050202000000080316053d036d03420991000000130100000003676574036c035b020000000203170000000000000002000c\",\n \"transactionHash\": \"ootDut4xxR2yeYz6JuySuyTVZnXgda2t8SYrk3iuJpm531TZuCj\",\n 
\"transactionHeaders\": {\n \"from\": \"tz1V3spuktTP2wuEZP7D2hJruLZ5uJTuJk31\",\n \"nonce\": \"23094946\"\n },\n \"updated\": \"2024-04-01T14:20:29.245172Z\"\n }\n}\n"},{"location":"tutorials/custom_contracts/tezos/#the-firefly-interface-format","title":"The FireFly Interface Format","text":"As we know from the previous section, smart contracts on the Tezos blockchain use a domain-specific, stack-based programming language called Michelson. It is a key component of the Tezos platform and plays a fundamental role in defining the behavior of smart contracts and facilitating their execution. The language is very efficient but also tricky and challenging to learn, so in order to teach FireFly how to interact with the smart contract, we will use a FireFly Interface (FFI) to define the contract interface, which will later be encoded to Michelson.
"},{"location":"tutorials/custom_contracts/tezos/#schema-details","title":"Schema details","text":"The details field is used to encapsulate blockchain specific type information about a specific field. (More details at schema details)
internalType is a field used to describe Tezos primitive types:
{\n \"details\": {\n \"type\": \"address\",\n \"internalType\": \"address\"\n }\n}\n internalSchema, in turn, is used to describe more complex Tezos types such as list, struct, or variant.
Struct example:
{\n \"details\": {\n \"type\": \"schema\",\n \"internalSchema\": {\n \"type\": \"struct\",\n \"args\": [\n {\n \"name\": \"metadata\",\n \"type\": \"bytes\"\n },\n {\n \"name\": \"token_id\",\n \"type\": \"nat\"\n }\n ]\n }\n }\n}\n List example:
{\n \"details\": {\n \"type\": \"schema\",\n \"internalSchema\": {\n \"type\": \"list\",\n \"args\": [\n {\n \"type\": \"nat\"\n }\n ]\n }\n }\n}\n Variant example:
{\n \"details\": {\n \"type\": \"schema\",\n \"internalSchema\": {\n \"type\": \"variant\",\n \"variants\": [\"add_operator\", \"remove_operator\"],\n \"args\": [\n {\n \"type\": \"struct\",\n \"args\": [\n {\n \"name\": \"owner\",\n \"type\": \"address\"\n },\n {\n \"name\": \"operator\",\n \"type\": \"address\"\n },\n {\n \"name\": \"token_id\",\n \"type\": \"nat\"\n }\n ]\n }\n ]\n }\n }\n}\n Map example:
{\n \"details\": {\n \"type\": \"schema\",\n \"internalSchema\": {\n \"type\": \"map\",\n \"args\": [\n {\n \"name\": \"key\",\n \"type\": \"integer\"\n },\n {\n \"name\": \"value\",\n \"type\": \"string\"\n }\n ]\n }\n }\n}\n"},{"location":"tutorials/custom_contracts/tezos/#options","title":"Options","text":"The option type is used to mark a value as optional (see more at smartpy options)
{\n \"details\": {\n \"type\": \"string\",\n \"internalType\": \"string\",\n \"kind\": \"option\"\n }\n}\n"},{"location":"tutorials/custom_contracts/tezos/#fa2-example","title":"FA2 example","text":"The following FFI sample demonstrates the specification for the widely used FA2 (analogue of ERC721 for EVM) smart contract:
{\n \"namespace\": \"default\",\n \"name\": \"fa2\",\n \"version\": \"v1.0.0\",\n \"description\": \"\",\n \"methods\": [\n {\n \"name\": \"burn\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"token_ids\",\n \"schema\": {\n \"type\": \"array\",\n \"details\": {\n \"type\": \"nat\",\n \"internalType\": \"nat\"\n }\n }\n }\n ],\n \"returns\": []\n },\n {\n \"name\": \"destroy\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [],\n \"returns\": []\n },\n {\n \"name\": \"mint\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"owner\",\n \"schema\": {\n \"type\": \"string\",\n \"details\": {\n \"type\": \"address\",\n \"internalType\": \"address\"\n }\n }\n },\n {\n \"name\": \"requests\",\n \"schema\": {\n \"type\": \"array\",\n \"details\": {\n \"type\": \"schema\",\n \"internalSchema\": {\n \"type\": \"struct\",\n \"args\": [\n {\n \"name\": \"metadata\",\n \"type\": \"bytes\"\n },\n {\n \"name\": \"token_id\",\n \"type\": \"nat\"\n }\n ]\n }\n }\n }\n }\n ],\n \"returns\": []\n },\n {\n \"name\": \"pause\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"pause\",\n \"schema\": {\n \"type\": \"boolean\",\n \"details\": {\n \"type\": \"boolean\",\n \"internalType\": \"boolean\"\n }\n }\n }\n ],\n \"returns\": []\n },\n {\n \"name\": \"select\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"batch\",\n \"schema\": {\n \"type\": \"array\",\n \"details\": {\n \"type\": \"schema\",\n \"internalSchema\": {\n \"type\": \"struct\",\n \"args\": [\n {\n \"name\": \"token_id\",\n \"type\": \"nat\"\n },\n {\n \"name\": \"recipient\",\n \"type\": \"address\"\n },\n {\n \"name\": \"token_id_start\",\n \"type\": \"nat\"\n },\n {\n \"name\": \"token_id_end\",\n \"type\": \"nat\"\n }\n ]\n }\n }\n }\n }\n ],\n \"returns\": []\n },\n {\n \"name\": \"transfer\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [\n {\n 
\"name\": \"batch\",\n \"schema\": {\n \"type\": \"array\",\n \"details\": {\n \"type\": \"schema\",\n \"internalSchema\": {\n \"type\": \"struct\",\n \"args\": [\n {\n \"name\": \"from_\",\n \"type\": \"address\"\n },\n {\n \"name\": \"txs\",\n \"type\": \"list\",\n \"args\": [\n {\n \"type\": \"struct\",\n \"args\": [\n {\n \"name\": \"to_\",\n \"type\": \"address\"\n },\n {\n \"name\": \"token_id\",\n \"type\": \"nat\"\n },\n {\n \"name\": \"amount\",\n \"type\": \"nat\"\n }\n ]\n }\n ]\n }\n ]\n }\n }\n }\n }\n ],\n \"returns\": []\n },\n {\n \"name\": \"update_admin\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"admin\",\n \"schema\": {\n \"type\": \"string\",\n \"details\": {\n \"type\": \"address\",\n \"internalType\": \"address\"\n }\n }\n }\n ],\n \"returns\": []\n },\n {\n \"name\": \"update_operators\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"requests\",\n \"schema\": {\n \"type\": \"array\",\n \"details\": {\n \"type\": \"schema\",\n \"internalSchema\": {\n \"type\": \"variant\",\n \"variants\": [\"add_operator\", \"remove_operator\"],\n \"args\": [\n {\n \"type\": \"struct\",\n \"args\": [\n {\n \"name\": \"owner\",\n \"type\": \"address\"\n },\n {\n \"name\": \"operator\",\n \"type\": \"address\"\n },\n {\n \"name\": \"token_id\",\n \"type\": \"nat\"\n }\n ]\n }\n ]\n }\n }\n }\n }\n ],\n \"returns\": []\n }\n ],\n \"events\": []\n}\n"},{"location":"tutorials/custom_contracts/tezos/#broadcast-the-contract-interface","title":"Broadcast the contract interface","text":"Now that we have a FireFly Interface representation of our smart contract, we want to broadcast that to the entire network. This broadcast will be pinned to the blockchain, so we can always refer to this specific name and version, and everyone in the network will know exactly which contract interface we are talking about.
We will use the FFI JSON constructed above and POST that to the /contracts/interfaces API endpoint.
POST http://localhost:5000/api/v1/namespaces/default/contracts/interfaces
{\n \"namespace\": \"default\",\n \"name\": \"simplestorage\",\n \"version\": \"v1.0.0\",\n \"description\": \"\",\n \"methods\": [\n {\n \"name\": \"set\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"newValue\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"integer\",\n \"internalType\": \"integer\"\n }\n }\n }\n ],\n \"returns\": []\n },\n {\n \"name\": \"get\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [],\n \"returns\": []\n }\n ],\n \"events\": []\n}\n"},{"location":"tutorials/custom_contracts/tezos/#response","title":"Response","text":"{\n \"id\": \"f9e34787-e634-46cd-af47-b52c537404ff\",\n \"namespace\": \"default\",\n \"name\": \"simplestorage\",\n \"description\": \"\",\n \"version\": \"v1.0.0\",\n \"methods\": [\n {\n \"id\": \"78f13a7f-7b85-47c3-bf51-346a9858c027\",\n \"interface\": \"f9e34787-e634-46cd-af47-b52c537404ff\",\n \"name\": \"set\",\n \"namespace\": \"default\",\n \"pathname\": \"set\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"newValue\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"integer\",\n \"internalType\": \"integer\"\n }\n }\n }\n ],\n \"returns\": []\n },\n {\n \"id\": \"ee864e25-c3f7-42d3-aefd-a82f753e9002\",\n \"interface\": \"f9e34787-e634-46cd-af47-b52c537404ff\",\n \"name\": \"get\",\n \"namespace\": \"tezos\",\n \"pathname\": \"get\",\n \"description\": \"\",\n \"params\": [],\n \"returns\": []\n }\n ]\n}\n NOTE: We can broadcast this contract interface conveniently with the help of FireFly Sandbox running at http://127.0.0.1:5108
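Since each Tezos FFI parameter pairs a JSON Schema type with a details block, it can help to generate the interface programmatically. A sketch that rebuilds the simplestorage FFI above (the ffi_param and ffi_method helper names are our own):

```python
def ffi_param(name, json_type, tezos_type):
    """One FFI parameter: a JSON Schema type plus the Tezos-specific
    `details` block described earlier in this guide."""
    return {
        "name": name,
        "schema": {
            "type": json_type,
            "details": {"type": tezos_type, "internalType": tezos_type},
        },
    }

def ffi_method(name, params):
    return {"name": name, "pathname": "", "description": "",
            "params": params, "returns": []}

simplestorage_ffi = {
    "namespace": "default",
    "name": "simplestorage",
    "version": "v1.0.0",
    "description": "",
    "methods": [
        ffi_method("set", [ffi_param("newValue", "integer", "integer")]),
        ffi_method("get", []),
    ],
    "events": [],
}
# POST simplestorage_ffi to /api/v1/namespaces/default/contracts/interfaces
```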
In the Sandbox, go to the Contracts Section, click Define a Contract Interface, select FFI - FireFly Interface in the Interface Format dropdown, paste the FFI JSON crafted by you into the Schema Field, and click Run. Now comes the fun part where we see some of the powerful, developer-friendly features of FireFly. The next thing we're going to do is tell FireFly to build an HTTP API for this smart contract, complete with an OpenAPI Specification and Swagger UI. As part of this, we'll also tell FireFly where the contract is on the blockchain.
Like the interface broadcast above, this will also generate a broadcast which will be pinned to the blockchain so all the members of the network will be aware of and able to interact with this API.
We need to copy the id field we got in the response from the previous step to the interface.id field in the request body below. We will also pick a name that will be part of the URL for our HTTP API, so be sure to pick a name that is URL friendly. In this case we'll call it simple-storage. Lastly, in the location.address field, we're telling FireFly where an instance of the contract is deployed on-chain.
NOTE: The location field is optional here, but if it is omitted, it will be required in every request to invoke or query the contract. This can be useful if you have multiple instances of the same contract deployed to different addresses.
POST http://localhost:5000/api/v1/namespaces/default/apis
{\n \"name\": \"simple-storage\",\n \"interface\": {\n \"id\": \"f9e34787-e634-46cd-af47-b52c537404ff\"\n },\n \"location\": {\n \"address\": \"KT1ED4gj2xZnp8318yxa5NpvyvW15pqe4yFg\"\n }\n}\n"},{"location":"tutorials/custom_contracts/tezos/#response_1","title":"Response","text":"{\n \"id\": \"af09de97-741d-4f61-8d30-4db5e7460f76\",\n \"namespace\": \"default\",\n \"interface\": {\n \"id\": \"f9e34787-e634-46cd-af47-b52c537404ff\"\n },\n \"location\": {\n \"address\": \"KT1ED4gj2xZnp8318yxa5NpvyvW15pqe4yFg\"\n },\n \"name\": \"simple-storage\",\n \"urls\": {\n \"openapi\": \"http://127.0.0.1:5000/api/v1/namespaces/default/apis/simple-storage/api/swagger.json\",\n \"ui\": \"http://127.0.0.1:5000/api/v1/namespaces/default/apis/simple-storage/api\"\n }\n}\n"},{"location":"tutorials/custom_contracts/tezos/#view-openapi-spec-for-the-contract","title":"View OpenAPI spec for the contract","text":"You'll notice in the response body that there are a couple of URLs near the bottom. If you navigate to the one labeled ui in your browser, you should see the Swagger UI for your smart contract.
Now that we've got everything set up, it's time to use our smart contract! We're going to make a POST request to the invoke/set endpoint to set the integer value on-chain. Let's set it to the value of 3 right now.
POST http://localhost:5000/api/v1/namespaces/default/apis/simple-storage/invoke/set
{\n \"input\": {\n \"newValue\": 3\n }\n}\n"},{"location":"tutorials/custom_contracts/tezos/#response_2","title":"Response","text":"{\n \"id\": \"87c7ee1b-33d1-46e2-b3f5-8566c14367cf\",\n \"type\": \"blockchain_invoke\",\n \"status\": \"Pending\",\n \"...\"\n}\n You'll notice that we got an ID back with status Pending, and that's expected due to the asynchronous programming model of working with smart contracts in FireFly. To see what the value is now, we can query the smart contract.
To make a read-only request to the blockchain to check the current value of the stored integer, we can make a POST to the query/get endpoint.
POST http://localhost:5000/api/v1/namespaces/default/apis/simple-storage/query/get
{}\n"},{"location":"tutorials/custom_contracts/tezos/#response_3","title":"Response","text":"{\n \"3\"\n}\n NOTE: Some contracts may have queries that require input parameters. That's why the query endpoint is a POST rather than a GET, so that parameters can be passed as JSON in the request body. This particular function does not have any parameters, so we just pass an empty JSON object.
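The write-then-read round trip above can be scripted. A stdlib-only sketch (the set_body and post_json helper names are ours; the URL follows this guide); note that the invoke is asynchronous while the query returns synchronously:

```python
import json
from urllib import request

API = "http://localhost:5000/api/v1/namespaces/default/apis/simple-storage"

def set_body(new_value):
    """Invoke body for `set`; the call is asynchronous and returns an
    operation in status Pending."""
    return {"input": {"newValue": new_value}}

def post_json(path, body):
    # Requires a running FireFly stack; not executed here.
    req = request.Request(API + path, data=json.dumps(body).encode(),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)

# Round trip: submit the write, then (once confirmed) read it back.
# post_json("/invoke/set", set_body(3))
# post_json("/query/get", {})   # read-only, returns the stored value
```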
Tokens are a critical building block in many blockchain-backed applications. Fungible tokens can represent a store of value or a means of rewarding participation in a multi-party system, while non-fungible tokens provide a clear way to identify and track unique entities across the network. FireFly provides flexible mechanisms to operate on any type of token and to tie those operations to on- and off-chain data.
Token pools are a FireFly construct for describing a set of tokens. The exact definition of a token pool is dependent on the token connector implementation. Some examples of how pools might map to various well-defined Ethereum standards:
These are provided as examples only - a custom token connector could be backed by any token technology (Ethereum or otherwise) as long as it can support the basic operations described here (create pool, mint, burn, transfer). Other FireFly repos include a sample implementation of a token connector for ERC-20 and ERC-721 as well as ERC-1155.
"},{"location":"tutorials/tokens/erc1155/","title":"Use ERC-1155 tokens","text":""},{"location":"tutorials/tokens/erc1155/#previous-steps-install-the-firefly-cli","title":"Previous steps: Install the FireFly CLI","text":"If you haven't set up the FireFly CLI already, please go back to the Getting Started guide and read the section on how to Install the FireFly CLI.
"},{"location":"tutorials/tokens/erc1155/#create-a-stack-with-an-erc-1155-connector","title":"Create a stack with an ERC-1155 connector","text":"The default token connector that the FireFly CLI sets up is for ERC-20 and ERC-721. If you would like to work with ERC-1155 tokens, you need to create a stack that is configured to use that token connector. To do that, run:
ff init ethereum -t erc-1155\n Then run:
ff start <your_stack_name>\n"},{"location":"tutorials/tokens/erc1155/#about-the-sample-token-contract","title":"About the sample token contract","text":"When the FireFly CLI set up your FireFly stack, it also deployed a sample ERC-1155 contract that conforms to the expectations of the token connector. When you create a token pool through FireFly's token APIs, that contract will be used by default.
\u26a0\ufe0f WARNING: The default token contract that was deployed by the FireFly CLI is only provided for the purpose of learning about FireFly. It is not a production grade contract. If you intend to deploy a production application using tokens on FireFly, you should research token contract best practices. For details, please see the source code for the contract that was deployed."},{"location":"tutorials/tokens/erc1155/#use-the-sandbox-optional","title":"Use the Sandbox (optional)","text":"At this point you could open the Sandbox at http://127.0.0.1:5109/home?action=tokens.pools and perform the functions outlined in the rest of this guide. Or you can keep reading to learn how to build HTTP requests to work with tokens in FireFly.
"},{"location":"tutorials/tokens/erc1155/#create-a-pool-using-default-token-contract","title":"Create a pool (using default token contract)","text":"After your stack is up and running, the first thing you need to do is create a token pool. Every application will need at least one token pool. At a minimum, you must always specify a name and type (fungible or nonfungible) for the pool.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/pools?publish=true
NOTE: Without passing the query parameter publish=true when the token pool is created, it will initially be unpublished and not broadcasted to other members of the network (if configured in multi-party). To publish the token pool, a subsequent API call would need to be made to /tokens/pools/{nameOrId}/publish
{\n \"name\": \"testpool\",\n \"type\": \"fungible\"\n}\n Other parameters:
- connector if you have configured multiple token connectors
- config object of additional parameters, if supported by your token connector
- key understood by the connector (i.e. an Ethereum address) if you'd like to use a non-default signing identity

If you wish to use a contract that is already on the chain, it is recommended that you first upload the ABI for your specific contract by creating a FireFly contract interface. This step is optional if you're certain that your ERC-1155 ABI conforms to the default expectations of the token connector, but is generally recommended.
See the README of the token connector for details on what contract variants can currently be understood.
You can pass a config object with an address when you make the request to create the token pool, and if you created a contract interface, you can include the interface ID as well.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/pools?publish=true
NOTE: Without passing the query parameter publish=true when the token pool is created, it will initially be unpublished and not broadcasted to other members of the network (if configured in multi-party). To publish the token pool, a subsequent API call would need to be made to /tokens/pools/{nameOrId}/publish
{\n \"name\": \"testpool\",\n \"type\": \"fungible\",\n \"interface\": {\n \"id\": \"b9e5e1ce-97bb-4a35-a25c-52c7c3f523d8\"\n },\n \"config\": {\n \"address\": \"0xb1C845D32966c79E23f733742Ed7fCe4B41901FC\"\n }\n}\n"},{"location":"tutorials/tokens/erc1155/#mint-tokens","title":"Mint tokens","text":"Once you have a token pool, you can mint tokens within it. With the default sample contract, only the creator of a pool is allowed to mint - but each contract may define its own permission model.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/mint
{\n \"amount\": 10\n}\n Other parameters:
- pool name if you've created more than one pool
- key understood by the connector (i.e. an Ethereum address) if you'd like to use a non-default signing identity
- to if you'd like to send the minted tokens to a specific identity (default is the same as key)

You may transfer tokens within a pool by specifying an amount and a destination understood by the connector (i.e. an Ethereum address). With the default sample contract, only the owner of a token or another approved account may transfer it away - but each contract may define its own permission model.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/transfers
{\n \"amount\": 1,\n \"to\": \"0x07eab7731db665caf02bc92c286f51dea81f923f\"\n}\n NOTE: When transferring a non-fungible token, the amount must always be 1. The tokenIndex field is also required when transferring a non-fungible token.
Other parameters:
- pool name if you've created more than one pool
- key understood by the connector (i.e. an Ethereum address) if you'd like to use a non-default signing identity
- from if you'd like to send tokens from a specific identity (default is the same as key)

All transfers (as well as mint/burn operations) support an optional message parameter that contains a broadcast or private message to be sent along with the transfer. This message follows the same convention as other FireFly messages, and may be composed of text or blob data, and can provide context, metadata, or other supporting information about the transfer. The message will be batched, hashed, and pinned to the primary blockchain.
The message ID and hash will also be sent to the token connector as part of the transfer operation, to be written to the token blockchain when the transaction is submitted. All recipients of the message will then be able to correlate the message with the token transfer.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/transfers
{\n \"amount\": 1,\n \"to\": \"0x07eab7731db665caf02bc92c286f51dea81f923f\",\n \"message\": {\n \"data\": [\n {\n \"value\": \"payment for goods\"\n }\n ]\n }\n}\n"},{"location":"tutorials/tokens/erc1155/#private-message","title":"Private message","text":"{\n \"amount\": 1,\n \"to\": \"0x07eab7731db665caf02bc92c286f51dea81f923f\",\n \"message\": {\n \"header\": {\n \"type\": \"transfer_private\"\n },\n \"group\": {\n \"members\": [\n {\n \"identity\": \"org_1\"\n }\n ]\n },\n \"data\": [\n {\n \"value\": \"payment for goods\"\n }\n ]\n }\n}\n Note that all parties in the network will be able to see the transfer (including the message ID and hash), but only the recipients of the message will be able to view the actual message data.
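The broadcast and private request bodies above differ only in the message's header and group fields, so assembling either can be sketched as follows (a hypothetical helper, sketched in Python; the field names match the JSON examples above):

```python
# Sketch (hypothetical helper name) of assembling the transfer-with-message
# bodies shown above: supplying group members makes the attached message
# private; otherwise it is a broadcast.
def transfer_with_message(amount, to, text, members=None):
    message = {"data": [{"value": text}]}
    if members:
        message["header"] = {"type": "transfer_private"}
        message["group"] = {"members": [{"identity": m} for m in members]}
    return {"amount": amount, "to": to, "message": message}

broadcast = transfer_with_message(1, "0x07eab7731db665caf02bc92c286f51dea81f923f", "payment for goods")
private = transfer_with_message(1, "0x07eab7731db665caf02bc92c286f51dea81f923f",
                                "payment for goods", members=["org_1"])
```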
"},{"location":"tutorials/tokens/erc1155/#burn-tokens","title":"Burn tokens","text":"You may burn tokens by simply specifying an amount. With the default sample contract, only the owner of a token or another approved account may burn it - but each connector may define its own permission model.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/burn
{\n \"amount\": 1\n}\n NOTE: When burning a non-fungible token, the amount must always be 1. The tokenIndex field is also required when burning a non-fungible token.
Other parameters:
- pool name if you've created more than one pool
- key understood by the connector (i.e. an Ethereum address) if you'd like to use a non-default signing identity
- from if you'd like to burn tokens from a specific identity (default is the same as key)

You can also approve other wallets to transfer tokens on your behalf with the /approvals API. The important fields in a token approval API request are as follows:
- approved: Sets whether another account is allowed to transfer tokens out of this wallet or not. If not specified, will default to true. Setting to false can revoke an existing approval.
- operator: The other account that is allowed to transfer tokens out of the wallet specified in the key field
- key: The wallet address for the approval. If not set, it defaults to the address of the FireFly node submitting the transaction

Here is an example request that would let the signing account 0x634ee8c7d0894d086c7af1fc8514736aed251528 transfer any amount of tokens from my wallet.
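The defaults described above can be captured in a small payload builder (a hypothetical helper, sketched in Python; the field names follow the API description above). The grant variant matches the raw request shown next:

```python
# Hypothetical payload builder capturing the approval field defaults:
# approved defaults to true, and key is omitted so the node's own
# address is used for the approval.
def approval_payload(operator, approved=True, key=None):
    body = {"operator": operator, "approved": approved}
    if key is not None:
        body["key"] = key
    return body

grant = approval_payload("0x634ee8c7d0894d086c7af1fc8514736aed251528")
# Setting approved to false revokes an existing approval
revoke = approval_payload("0x634ee8c7d0894d086c7af1fc8514736aed251528", approved=False)
```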
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/approvals
{\n \"operator\": \"0x634ee8c7d0894d086c7af1fc8514736aed251528\"\n}\n"},{"location":"tutorials/tokens/erc1155/#response","title":"Response","text":"{\n \"localId\": \"46fef50a-cf93-4f92-acf8-fae161b37362\",\n \"pool\": \"e1477ed5-7282-48e5-ad9d-1612296bb29d\",\n \"connector\": \"erc1155\",\n \"key\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"operator\": \"0x634ee8c7d0894d086c7af1fc8514736aed251528\",\n \"approved\": true,\n \"tx\": {\n \"type\": \"token_approval\",\n \"id\": \"00faa011-f42c-403d-a047-2df7318967cd\"\n }\n}\n"},{"location":"tutorials/tokens/erc20/","title":"Use ERC-20 tokens","text":""},{"location":"tutorials/tokens/erc20/#previous-steps-start-your-environment","title":"Previous steps: Start your environment","text":"If you haven't started a FireFly stack already, please go to the Getting Started guide on how to Start your environment. This will set up a token connector that works with both ERC-20 and ERC-721 by default.
"},{"location":"tutorials/tokens/erc20/#about-the-sample-token-contracts","title":"About the sample token contracts","text":"If you are using the default ERC-20 / ERC-721 token connector, when the FireFly CLI set up your FireFly stack, it also deployed a token factory contract. When you create a token pool through FireFly's token APIs, the token factory contract will automatically deploy an ERC-20 or ERC-721 contract, based on the pool type in the API request.
At this point you could open the Sandbox at http://127.0.0.1:5109/home?action=tokens.pools and perform the functions outlined in the rest of this guide. Or you can keep reading to learn how to build HTTP requests to work with tokens in FireFly.
"},{"location":"tutorials/tokens/erc20/#create-a-pool-using-default-token-factory","title":"Create a pool (using default token factory)","text":"After your stack is up and running, the first thing you need to do is create a token pool. Every application will need at least one token pool. At a minimum, you must always specify a name and type for the pool.
If you're using the default ERC-20 / ERC-721 token connector and its sample token factory, it will automatically deploy a new ERC-20 contract instance.
"},{"location":"tutorials/tokens/erc20/#request","title":"Request","text":"POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/pools?publish=true
NOTE: Without passing the query parameter publish=true when the token pool is created, it will initially be unpublished and not broadcasted to other members of the network (if configured in multi-party). To publish the token pool, a subsequent API call would need to be made to /tokens/pools/{nameOrId}/publish
{\n \"name\": \"testpool\",\n \"type\": \"fungible\"\n}\n"},{"location":"tutorials/tokens/erc20/#response","title":"Response","text":"{\n \"id\": \"e1477ed5-7282-48e5-ad9d-1612296bb29d\",\n \"type\": \"fungible\",\n \"namespace\": \"default\",\n \"name\": \"testpool\",\n \"key\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"connector\": \"erc20_erc721\",\n \"tx\": {\n \"type\": \"token_pool\",\n \"id\": \"e901921e-ffc4-4776-b20a-9e9face70a47\"\n },\n \"published\": true\n}\n Other parameters:
- connector if you have configured multiple token connectors
- config object of additional parameters, if supported by your token connector
- key understood by the connector (i.e. an Ethereum address) if you'd like to use a non-default signing identity

To look up the address of the new contract, you can fetch the token pool by its ID on the API. Creating the token pool will also emit an event which will contain the address. To query the token pool, you can make a GET request to the pool's ID:
GET http://127.0.0.1:5000/api/v1/namespaces/default/tokens/pools/e1477ed5-7282-48e5-ad9d-1612296bb29d
{\n \"id\": \"e1477ed5-7282-48e5-ad9d-1612296bb29d\",\n \"type\": \"fungible\",\n \"namespace\": \"default\",\n \"name\": \"testpool\",\n \"standard\": \"ERC20\",\n \"locator\": \"address=0xc4d02efcfab06f18ec0a68e00b98ffecf6bf7e3c&schema=ERC20WithData&type=fungible\",\n \"decimals\": 18,\n \"connector\": \"erc20_erc721\",\n \"message\": \"7e2f6004-31fd-4ba8-9845-15c5fe5fbcd7\",\n \"state\": \"confirmed\",\n \"created\": \"2022-04-28T14:03:16.732222381Z\",\n \"info\": {\n \"address\": \"0xc4d02efcfab06f18ec0a68e00b98ffecf6bf7e3c\",\n \"name\": \"testpool\",\n \"schema\": \"ERC20WithData\"\n },\n \"tx\": {\n \"type\": \"token_pool\",\n \"id\": \"e901921e-ffc4-4776-b20a-9e9face70a47\"\n }\n}\n"},{"location":"tutorials/tokens/erc20/#create-a-pool-from-a-deployed-token-contract","title":"Create a pool (from a deployed token contract)","text":"If you wish to index and use a contract that is already on the chain, it is recommended that you first upload the ABI for your specific contract by creating a FireFly contract interface. This step is optional if you're certain that your ERC-20 ABI conforms to the default expectations of the token connector, but is generally recommended.
See the README of the token connector for details on what contract variants can currently be understood.
You can pass a config object with an address and blockNumber when you make the request to create the token pool, and if you created a contract interface, you can include the interface ID as well.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/pools?publish=true
NOTE: Without passing the query parameter publish=true when the token pool is created, it will initially be unpublished and not broadcasted to other members of the network (if configured in multi-party). To publish the token pool, a subsequent API call would need to be made to /tokens/pools/{nameOrId}/publish
{\n \"name\": \"testpool\",\n \"type\": \"fungible\",\n \"interface\": {\n \"id\": \"b9e5e1ce-97bb-4a35-a25c-52c7c3f523d8\"\n },\n \"config\": {\n \"address\": \"0xb1C845D32966c79E23f733742Ed7fCe4B41901FC\",\n \"blockNumber\": \"0\"\n }\n}\n"},{"location":"tutorials/tokens/erc20/#mint-tokens","title":"Mint tokens","text":"Once you have a token pool, you can mint tokens within it. When using the sample contract deployed by the CLI, only the creator of a pool is allowed to mint, but a different contract may define its own permission model.
NOTE: The default sample contract uses 18 decimal places. This means that if you want to create 100 tokens, the number submitted to the API / blockchain should actually be 100\u00d710^18 = 100000000000000000000. This allows users to work with \"fractional\" tokens even though Ethereum virtual machines only support integer arithmetic.
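The scaling described in the note is safest with exact decimal arithmetic rather than floats. A minimal sketch (the helper name is an assumption):

```python
from decimal import Decimal

# Sketch of scaling a human-readable amount to the integer base units the
# token API expects. Decimal (not float) arithmetic avoids precision loss
# on fractional amounts like "0.5".
def to_base_units(amount, decimals=18):
    scaled = Decimal(str(amount)) * (Decimal(10) ** decimals)
    if scaled != scaled.to_integral_value():
        raise ValueError("amount has more precision than the token supports")
    return str(int(scaled))
```

For example, `to_base_units(100)` yields the "100000000000000000000" used in the mint request below.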
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/mint
{\n \"amount\": \"100000000000000000000\"\n}\n"},{"location":"tutorials/tokens/erc20/#response_2","title":"Response","text":"{\n \"type\": \"mint\",\n \"localId\": \"835fe2a1-594b-4336-bc1d-b2f59d51064b\",\n \"pool\": \"e1477ed5-7282-48e5-ad9d-1612296bb29d\",\n \"connector\": \"erc20_erc721\",\n \"key\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"from\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"to\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"amount\": \"100000000000000000000\",\n \"tx\": {\n \"type\": \"token_transfer\",\n \"id\": \"3fc97e24-fde1-4e80-bd82-660e479c0c43\"\n }\n}\n Other parameters:
- pool name if you've created more than one pool
- key understood by the connector (i.e. an Ethereum address) if you'd like to use a non-default signing identity
- to if you'd like to send the minted tokens to a specific identity (default is the same as key)

You may transfer tokens within a pool by specifying an amount and a destination understood by the connector (i.e. an Ethereum address). With the default sample contract, only the owner of the tokens or another approved account may transfer their tokens, but a different contract may define its own permission model.
"},{"location":"tutorials/tokens/erc20/#request_4","title":"Request","text":"POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/transfers
{\n \"amount\": \"10000000000000000000\",\n \"to\": \"0xa4222a4ae19448d43a338e6586edd5fb2ac398e1\"\n}\n"},{"location":"tutorials/tokens/erc20/#response_3","title":"Response","text":"{\n \"type\": \"transfer\",\n \"localId\": \"61f0a71f-712b-4778-8b37-784fbee52657\",\n \"pool\": \"e1477ed5-7282-48e5-ad9d-1612296bb29d\",\n \"connector\": \"erc20_erc721\",\n \"key\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"from\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"to\": \"0xa4222a4ae19448d43a338e6586edd5fb2ac398e1\",\n \"amount\": \"10000000000000000000\",\n \"tx\": {\n \"type\": \"token_transfer\",\n \"id\": \"c0c316a3-23a9-42f3-89b3-1cfdba6c948d\"\n }\n}\n Other parameters:
- pool name if you've created more than one pool
- key understood by the connector (i.e. an Ethereum address) if you'd like to use a non-default signing identity
- from if you'd like to send tokens from a specific identity (default is the same as key)

All transfers (as well as mint/burn operations) support an optional message parameter that contains a broadcast or private message to be sent along with the transfer. This message follows the same convention as other FireFly messages, and may be composed of text or blob data, and can provide context, metadata, or other supporting information about the transfer. The message will be batched, hashed, and pinned to the primary blockchain.
The message ID and hash will also be sent to the token connector as part of the transfer operation, to be written to the token blockchain when the transaction is submitted. All recipients of the message will then be able to correlate the message with the token transfer.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/transfers
{\n \"amount\": 1,\n \"to\": \"0x07eab7731db665caf02bc92c286f51dea81f923f\",\n \"message\": {\n \"data\": [\n {\n \"value\": \"payment for goods\"\n }\n ]\n }\n}\n"},{"location":"tutorials/tokens/erc20/#private-message","title":"Private message","text":"{\n \"amount\": 1,\n \"to\": \"0x07eab7731db665caf02bc92c286f51dea81f923f\",\n \"message\": {\n \"header\": {\n \"type\": \"transfer_private\"\n },\n \"group\": {\n \"members\": [\n {\n \"identity\": \"org_1\"\n }\n ]\n },\n \"data\": [\n {\n \"value\": \"payment for goods\"\n }\n ]\n }\n}\n Note that all parties in the network will be able to see the transfer (including the message ID and hash), but only the recipients of the message will be able to view the actual message data.
"},{"location":"tutorials/tokens/erc20/#burn-tokens","title":"Burn tokens","text":"You may burn tokens by simply specifying an amount. With the default sample contract, only the owner of a token or another approved account may burn it, but a different contract may define its own permission model.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/burn
{\n \"amount\": 1\n}\n Other parameters:
- pool name if you've created more than one pool
- key understood by the connector (i.e. an Ethereum address) if you'd like to use a non-default signing identity
- from if you'd like to burn tokens from a specific identity (default is the same as key)

You can also approve other wallets to transfer tokens on your behalf with the /approvals API. The important fields in a token approval API request are as follows:
- approved: Sets whether another account is allowed to transfer tokens out of this wallet or not. If not specified, will default to true. Setting to false can revoke an existing approval.
- operator: The other account that is allowed to transfer tokens out of the wallet specified in the key field.
- config.allowance: The number of tokens the other account is allowed to transfer. If 0 or not set, the approval is valid for any number.
- key: The wallet address for the approval. If not set, it defaults to the address of the FireFly node submitting the transaction.

Here is an example request that would let the signing account 0x634ee8c7d0894d086c7af1fc8514736aed251528 transfer up to 10\u00d710^18 (10000000000000000000) tokens from my wallet.
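As with mint amounts, the allowance is expressed in base units. The 10-token figure used in the request that follows can be computed like this (a sketch; the helper name is an assumption):

```python
# The allowance is just the decimal-scaled integer, serialized as a string
# for the approvals API (hypothetical helper name).
DECIMALS = 18  # precision of the default sample contract

def allowance_config(tokens):
    return {"allowance": str(tokens * 10 ** DECIMALS)}

config = allowance_config(10)
```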
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/approvals
{\n \"operator\": \"0x634ee8c7d0894d086c7af1fc8514736aed251528\",\n \"config\": {\n \"allowance\": \"10000000000000000000\"\n }\n}\n"},{"location":"tutorials/tokens/erc20/#response_4","title":"Response","text":"{\n \"localId\": \"46fef50a-cf93-4f92-acf8-fae161b37362\",\n \"pool\": \"e1477ed5-7282-48e5-ad9d-1612296bb29d\",\n \"connector\": \"erc20_erc721\",\n \"key\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"operator\": \"0x634ee8c7d0894d086c7af1fc8514736aed251528\",\n \"approved\": true,\n \"tx\": {\n \"type\": \"token_approval\",\n \"id\": \"00faa011-f42c-403d-a047-2df7318967cd\"\n },\n \"config\": {\n \"allowance\": \"10000000000000000000\"\n }\n}\n"},{"location":"tutorials/tokens/erc20/#use-metamask","title":"Use Metamask","text":"Now that you have an ERC-20 contract up and running, you may be wondering how to use Metamask (or some other wallet) with this contract. This section will walk you through how to connect Metamask to the blockchain and token contract that FireFly is using.
"},{"location":"tutorials/tokens/erc20/#configure-a-new-network","title":"Configure a new network","text":"The first thing we need to do is tell Metamask how to connect to our local blockchain node. To do that:
In the drop down menu, click Settings
On the left hand side of the page, click Networks
Click the Add a network button
Fill in the network details:
- Network Name: FireFly (could be any name)
- New RPC URL: http://127.0.0.1:5100
- Chain ID: 2021

Metamask won't know about our custom ERC-20 contract until we give it the Ethereum address for the contract, so that's what we'll do next.
Click on Import tokens
Enter the Ethereum address of the contract
NOTE: You can find the address of your contract from the response to the request to create the token pool above. You can also do a GET to http://127.0.0.1:5000/api/v1/namespaces/default/tokens/pools to lookup your configured token pools.
Now you can copy your account address from your Metamask wallet, and perform a transfer from FireFly's API (as described above) to your Metamask address.
After a couple seconds, you should see your tokens show up in your Metamask wallet.
You can also send tokens to a FireFly address or any other Ethereum address from your Metamask wallet.
NOTE: You can find the Ethereum addresses for organizations in your FireFly network in the Network \u2192 Organizations page in the FireFly explorer. Click on an organization and look under the Verifiers header for the organization's Ethereum address.
"},{"location":"tutorials/tokens/erc721/","title":"Use ERC-721 tokens","text":""},{"location":"tutorials/tokens/erc721/#previous-steps-start-your-environment","title":"Previous steps: Start your environment","text":"If you haven't started a FireFly stack already, please go to the Getting Started guide on how to Start your environment. This will set up a token connector that works with both ERC-20 and ERC-721 by default.
"},{"location":"tutorials/tokens/erc721/#about-the-sample-token-contracts","title":"About the sample token contracts","text":"If you are using the default ERC-20 / ERC-721 token connector, when the FireFly CLI set up your FireFly stack, it also deployed a token factory contract. When you create a token pool through FireFly's token APIs, the token factory contract will automatically deploy an ERC-20 or ERC-721 contract, based on the pool type in the API request.
At this point you could open the Sandbox at http://127.0.0.1:5109/home?action=tokens.pools and perform the functions outlined in the rest of this guide. Or you can keep reading to learn how to build HTTP requests to work with tokens in FireFly.
"},{"location":"tutorials/tokens/erc721/#create-a-pool-using-default-token-factory","title":"Create a pool (using default token factory)","text":"After your stack is up and running, the first thing you need to do is create a token pool. Every application will need at least one token pool. At a minimum, you must always specify a name and type for the pool.
If you're using the default ERC-20 / ERC-721 token connector and its sample token factory, it will automatically deploy a new ERC-721 contract instance.
"},{"location":"tutorials/tokens/erc721/#request","title":"Request","text":"POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/pools?publish=true
NOTE: Without passing the query parameter publish=true when the token pool is created, it will initially be unpublished and not broadcasted to other members of the network (if configured in multi-party). To publish the token pool, a subsequent API call would need to be made to /tokens/pools/{nameOrId}/publish
{\n \"type\": \"nonfungible\",\n \"name\": \"nfts\"\n}\n"},{"location":"tutorials/tokens/erc721/#response","title":"Response","text":"{\n \"id\": \"a92a0a25-b886-4b43-931f-4add2840258a\",\n \"type\": \"nonfungible\",\n \"namespace\": \"default\",\n \"name\": \"nfts\",\n \"key\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"connector\": \"erc20_erc721\",\n \"tx\": {\n \"type\": \"token_pool\",\n \"id\": \"00678116-89d2-4295-990c-bd5ffa6e2434\"\n },\n \"published\": true\n}\n Other parameters:
- connector if you have configured multiple token connectors
- config object of additional parameters, if supported by your token connector
- key understood by the connector (i.e. an Ethereum address) if you'd like to use a non-default signing identity

To look up the address of the new contract, you can fetch the token pool by its ID on the API. Creating the token pool will also emit an event which will contain the address. To query the token pool, you can make a GET request to the pool's ID:
GET http://127.0.0.1:5000/api/v1/namespaces/default/tokens/pools/a92a0a25-b886-4b43-931f-4add2840258a
{\n \"id\": \"a92a0a25-b886-4b43-931f-4add2840258a\",\n \"type\": \"nonfungible\",\n \"namespace\": \"default\",\n \"name\": \"nfts\",\n \"standard\": \"ERC721\",\n \"locator\": \"address=0xc4d02efcfab06f18ec0a68e00b98ffecf6bf7e3c&schema=ERC721WithData&type=nonfungible\",\n \"connector\": \"erc20_erc721\",\n \"message\": \"53d95dda-e8ca-4546-9226-a0fdc6ec03ec\",\n \"state\": \"confirmed\",\n \"created\": \"2022-04-29T12:03:51.971349509Z\",\n \"info\": {\n \"address\": \"0xc4d02efcfab06f18ec0a68e00b98ffecf6bf7e3c\",\n \"name\": \"nfts\",\n \"schema\": \"ERC721WithData\"\n },\n \"tx\": {\n \"type\": \"token_pool\",\n \"id\": \"00678116-89d2-4295-990c-bd5ffa6e2434\"\n }\n}\n"},{"location":"tutorials/tokens/erc721/#create-a-pool-from-a-deployed-token-contract","title":"Create a pool (from a deployed token contract)","text":"If you wish to index and use a contract that is already on the chain, it is recommended that you first upload the ABI for your specific contract by creating a FireFly contract interface. This step is optional if you're certain that your ERC-721 ABI conforms to the default expectations of the token connector, but is generally recommended.
See the README of the token connector for details on what contract variants can currently be understood.
You can pass a config object with an address and blockNumber when you make the request to create the token pool, and if you created a contract interface, you can include the interface ID as well.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/pools?publish=true
NOTE: Without passing the query parameter publish=true when the token pool is created, it will initially be unpublished and not broadcasted to other members of the network (if configured in multi-party). To publish the token pool, a subsequent API call would need to be made to /tokens/pools/{nameOrId}/publish
{\n \"name\": \"testpool\",\n \"type\": \"nonfungible\",\n \"interface\": {\n \"id\": \"b9e5e1ce-97bb-4a35-a25c-52c7c3f523d8\"\n },\n \"config\": {\n \"address\": \"0xb1C845D32966c79E23f733742Ed7fCe4B41901FC\",\n \"blockNumber\": \"0\"\n }\n}\n"},{"location":"tutorials/tokens/erc721/#mint-a-token","title":"Mint a token","text":"Once you have a token pool, you can mint tokens within it. When using the sample contract deployed by the CLI, the following are true:
- tokenIndex must be set to a unique value
- amount must be 1

A different ERC-721 contract may define its own requirements.
"},{"location":"tutorials/tokens/erc721/#request_3","title":"Request","text":"POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/mint
{\n \"amount\": \"1\",\n \"tokenIndex\": \"1\"\n}\n"},{"location":"tutorials/tokens/erc721/#response_2","title":"Response","text":"{\n \"type\": \"mint\",\n \"localId\": \"2de2e05e-9474-4a08-a64f-2cceb076bdaa\",\n \"pool\": \"a92a0a25-b886-4b43-931f-4add2840258a\",\n \"connector\": \"erc20_erc721\",\n \"key\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"from\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"to\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"amount\": \"1\",\n \"tx\": {\n \"type\": \"token_transfer\",\n \"id\": \"0fad4581-7cb2-42c7-8f78-62d32205c2c2\"\n }\n}\n Other parameters:
- pool name if you've created more than one pool
- key understood by the connector (i.e. an Ethereum address) if you'd like to use a non-default signing identity
- to if you'd like to send the minted tokens to a specific identity (default is the same as key)

You may transfer tokens within a pool by specifying an amount and a destination understood by the connector (i.e. an Ethereum address). With the default sample contract, only the owner of the tokens or another approved account may transfer their tokens, but a different contract may define its own permission model.
When transferring an NFT, you must also specify the tokenIndex that you wish to transfer. The tokenIndex is simply the ID of the specific NFT within the pool that you wish to transfer.
NOTE: When transferring NFTs the amount must be 1. If you wish to transfer more NFTs, simply call the endpoint multiple times, specifying the token index of each token to transfer.
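Because each transfer moves exactly one NFT, moving several is one request per tokenIndex. This can be sketched with a hypothetical helper (the field names follow the transfers API above):

```python
# Hypothetical helper: since each NFT transfer must carry amount "1",
# transferring several NFTs means building one request body per tokenIndex
# and POSTing each to the /tokens/transfers endpoint.
def nft_transfer_payloads(token_indexes, to):
    return [{"amount": "1", "tokenIndex": str(i), "to": to} for i in token_indexes]

batch = nft_transfer_payloads([1, 2, 3], "0xa4222a4ae19448d43a338e6586edd5fb2ac398e1")
```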
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/transfers
{\n \"amount\": \"1\",\n \"tokenIndex\": \"1\",\n \"to\": \"0xa4222a4ae19448d43a338e6586edd5fb2ac398e1\"\n}\n"},{"location":"tutorials/tokens/erc721/#response_3","title":"Response","text":"{\n \"type\": \"transfer\",\n \"localId\": \"f5fd0d13-db13-4d70-9a99-6bcd747f1e42\",\n \"pool\": \"a92a0a25-b886-4b43-931f-4add2840258a\",\n \"tokenIndex\": \"1\",\n \"connector\": \"erc20_erc721\",\n \"key\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"from\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"to\": \"0xa4222a4ae19448d43a338e6586edd5fb2ac398e1\",\n \"amount\": \"1\",\n \"tx\": {\n \"type\": \"token_transfer\",\n \"id\": \"63c1a89b-240c-41eb-84bb-323d56f4ba5a\"\n }\n}\n Other parameters:
pool: name, if you've created more than one pool; key: a key understood by the connector (i.e. an Ethereum address), if you'd like to use a non-default signing identity; from: if you'd like to send tokens from a specific identity (default is the same as key). All transfers (as well as mint/burn operations) support an optional message parameter that contains a broadcast or private message to be sent along with the transfer. This message follows the same convention as other FireFly messages, may be composed of text or blob data, and can provide context, metadata, or other supporting information about the transfer. The message will be batched, hashed, and pinned to the primary blockchain.
The message ID and hash will also be sent to the token connector as part of the transfer operation, to be written to the token blockchain when the transaction is submitted. All recipients of the message will then be able to correlate the message with the token transfer.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/transfers
{\n \"amount\": 1,\n \"tokenIndex\": \"1\",\n \"to\": \"0x07eab7731db665caf02bc92c286f51dea81f923f\",\n \"message\": {\n \"data\": [\n {\n \"value\": \"payment for goods\"\n }\n ]\n }\n}\n"},{"location":"tutorials/tokens/erc721/#private-message","title":"Private message","text":"{\n \"amount\": 1,\n \"tokenIndex\": \"1\",\n \"to\": \"0x07eab7731db665caf02bc92c286f51dea81f923f\",\n \"message\": {\n \"header\": {\n \"type\": \"transfer_private\"\n },\n \"group\": {\n \"members\": [\n {\n \"identity\": \"org_1\"\n }\n ]\n },\n \"data\": [\n {\n \"value\": \"payment for goods\"\n }\n ]\n }\n}\n Note that all parties in the network will be able to see the transfer (including the message ID and hash), but only the recipients of the message will be able to view the actual message data.
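Building the broadcast and private variants of these request bodies can be sketched programmatically (transfer_with_message is a hypothetical helper; the field names mirror the two examples above):

```python
def transfer_with_message(to_address, token_index, text, private_members=None):
    """Build a transfer body carrying a broadcast (default) or private message."""
    message = {"data": [{"value": text}]}
    if private_members:
        # Private variant: transfer_private header plus the recipient group.
        message["header"] = {"type": "transfer_private"}
        message["group"] = {"members": [{"identity": m} for m in private_members]}
    return {"amount": 1, "tokenIndex": str(token_index), "to": to_address, "message": message}

broadcast = transfer_with_message(
    "0x07eab7731db665caf02bc92c286f51dea81f923f", 1, "payment for goods")
private = transfer_with_message(
    "0x07eab7731db665caf02bc92c286f51dea81f923f", 1, "payment for goods", ["org_1"])
```

The presence of the header/group fields is what distinguishes the private message from the broadcast.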
"},{"location":"tutorials/tokens/erc721/#burn-tokens","title":"Burn tokens","text":"You may burn a token by specifying the token's tokenIndex. With the default sample contract, only the owner of a token or another approved account may burn it, but a different contract may define its own permission model.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/burn
{\n \"amount\": 1,\n \"tokenIndex\": \"1\"\n}\n Other parameters:
pool: name, if you've created more than one pool; key: a key understood by the connector (i.e. an Ethereum address), if you'd like to use a non-default signing identity; from: if you'd like to burn tokens from a specific identity (default is the same as key). You can also approve other wallets to transfer tokens on your behalf with the /approvals API. The important fields in a token approval API request are as follows:
approved: Sets whether another account is allowed to transfer tokens out of this wallet or not. If not specified, it defaults to true; setting it to false can revoke an existing approval. operator: The other account that is allowed to transfer tokens out of the wallet specified in the key field. config.tokenIndex: The specific token index within the pool that the operator is allowed to transfer. If 0 or not set, the approval is valid for all tokens. key: The wallet address for the approval. If not set, it defaults to the address of the FireFly node submitting the transaction. Here is an example request that would let the signing account 0x634ee8c7d0894d086c7af1fc8514736aed251528 transfer tokenIndex 2 from my wallet.
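As a sketch, the fields above can be assembled into a request body like this (build_approval is a hypothetical helper, not part of any FireFly SDK):

```python
def build_approval(operator, token_index=None, approved=True, key=None):
    """Build a /tokens/approvals request body.

    Leaving token_index unset (or 0) approves the operator for all tokens
    in the pool; approved=False revokes an existing approval.
    """
    body = {"operator": operator, "approved": approved}
    if token_index:
        body["config"] = {"tokenIndex": str(token_index)}
    if key:
        body["key"] = key
    return body

grant = build_approval("0x634ee8c7d0894d086c7af1fc8514736aed251528", token_index=2)
revoke = build_approval("0x634ee8c7d0894d086c7af1fc8514736aed251528", approved=False)
```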
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/approvals
{\n \"operator\": \"0x634ee8c7d0894d086c7af1fc8514736aed251528\",\n \"config\": {\n \"tokenIndex\": \"2\"\n }\n}\n"},{"location":"tutorials/tokens/erc721/#response_4","title":"Response","text":"{\n \"localId\": \"46fef50a-cf93-4f92-acf8-fae161b37362\",\n \"pool\": \"e1477ed5-7282-48e5-ad9d-1612296bb29d\",\n \"connector\": \"erc20_erc721\",\n \"key\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"operator\": \"0x634ee8c7d0894d086c7af1fc8514736aed251528\",\n \"approved\": true,\n \"tx\": {\n \"type\": \"token_approval\",\n \"id\": \"00faa011-f42c-403d-a047-2df7318967cd\"\n },\n \"config\": {\n \"tokenIndex\": \"2\"\n }\n}\n"},{"location":"tutorials/tokens/erc721/#use-metamask","title":"Use Metamask","text":"Now that you have an ERC-721 contract up and running, you may be wondering how to use Metamask (or some other wallet) with this contract. This section will walk you through how to connect Metamask to the blockchain and token contract that FireFly is using.
"},{"location":"tutorials/tokens/erc721/#configure-a-new-network","title":"Configure a new network","text":"The first thing we need to do is tell Metamask how to connect to our local blockchain node. To do that:
In the drop down menu, click Settings
On the left hand side of the page, click Networks
Click the Add a network button
Fill in the network details:
Network Name: FireFly (could be any name); New RPC URL: http://127.0.0.1:5100; Chain ID: 2021. Metamask won't know about our custom ERC-721 contract until we give it the Ethereum address for the contract, so that's what we'll do next.
Click on Import tokens
Enter the Ethereum address of the contract
NOTE: You can find the address of your contract from the response to the request to create the token pool above. You can also do a GET to http://127.0.0.1:5000/api/v1/namespaces/default/tokens/pools to look up your configured token pools.
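Extracting the contract address from that pools response can be sketched as below. This assumes the ERC-20/ERC-721 connector reports the address under an info.address field (check the shape of your actual response, as other connectors may differ):

```python
import json

def pool_addresses(pools_json):
    """Map pool name -> contract address from a GET /tokens/pools response.

    Assumes each pool object carries its contract address under info.address;
    adjust the path for connectors that report it differently.
    """
    return {p["name"]: p.get("info", {}).get("address") for p in json.loads(pools_json)}

sample = '[{"name": "nfts", "info": {"address": "0x14ddd36a0c2f747130915bf5214061b1e4bec74c"}}]'
print(pool_addresses(sample))
```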
Now you can copy your account address from your Metamask wallet, and perform a transfer from FireFly's API (as described above) to your Metamask address.
After a couple seconds, you should see your token show up in your Metamask wallet.
NOTE: While the NFT token balance can be viewed in Metamask, it does not appear that Metamask supports sending these tokens to another address at this time.
"}]} {"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Hyperledger FireFly","text":"Hyperledger FireFly is an open source Supernode, a complete stack for enterprises to build and scale secure Web3 applications.
The easiest way to understand a FireFly Supernode is to think of it like a toolbox. Connect your existing apps and/or back office systems to the toolbox and within it there are two different sets of tools. One set of tools helps you connect to the Web3 world that already exists, and the other set allows you to build new decentralized applications quickly with security and scalability.
Head to the Understanding FireFly section for more details.
"},{"location":"SUMMARY/","title":"SUMMARY","text":"Hyperledger FireFly has a multi-tier pluggable architecture for supporting blockchains of all shapes and sizes. This includes a remote API that allows a microservice connector to be built from scratch in any programming language.
It also includes the Connector Toolkit, which is a pluggable SDK in Golang that provides a set of re-usable modules that can be used across blockchain implementations.
This is the preferred way to build a new blockchain connector, if you are comfortable with coding in Golang and there are language bindings available for the raw RPC interface of your blockchain.
"},{"location":"architecture/blockchain_connector_framework/#connector-toolkit-architecture","title":"Connector Toolkit Architecture","text":"The core component of the FireFly Connector Framework for Blockchains is a Go module called FireFly Transaction Manager (FFTM).
FFTM is responsible for:
Submission of transactions to blockchains of all types
Protocol connectivity decoupled via an additional lightweight API connector
Easy to add additional protocols that conform to normal patterns of TX submission / events
Monitoring and updating blockchain operations
Receipts
Confirmations
Extensible transaction handler with capabilities such as:
Nonce management: idempotent submission of transactions, and assignment of nonces
Transaction process history
Event streaming
The framework is currently constrained to blockchains that adhere to certain basic principles:
Has transactions
That are signed
That can optionally have gas semantics (limits and prices, expressed in a blockchain specific way)
Has events (or \"logs\")
That are emitted as a deterministic outcome of transactions
Has blocks
Containing zero or more transactions, with their associated events
With a parent hash
Has finality for transactions & events that can be expressed as a level of confidence over time
Confirmations: A number of sequential blocks in the canonical chain that contain the transaction
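The confirmation rule above can be expressed as a small calculation (illustrative only; the required threshold is a configurable confidence level, and 15 here is just an example value):

```python
def confirmations(tx_block, chain_head):
    """Count sequential canonical blocks from the transaction's block onward.

    The block that includes the transaction counts as the first confirmation.
    """
    return 0 if chain_head < tx_block else chain_head - tx_block + 1

def is_final(tx_block, chain_head, required=15):
    # Finality expressed as a confidence threshold over time: enough
    # confirmations stacked on top of the transaction's block.
    return confirmations(tx_block, chain_head) >= required
```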
The nonce for each transaction is assigned as early as possible in the flow:
This \"at source\" allocation of nonces provides the strictest assurance of order of transactions possible, because the order is locked in with the coordination of the business logic of the application submitting the transaction.
As well as protecting against loss of transactions, this protects against duplication of transactions - even in crash recovery scenarios with a sufficiently reliable persistence layer.
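An in-memory sketch of the "at source" idea, purely for illustration (a real connector persists this state durably, which is what makes crash recovery safe):

```python
import itertools
import threading

class AtSourceNonceAllocator:
    """Illustrative sketch of "at source" nonce allocation.

    The nonce is locked in per signing key at the moment the application
    submits, so submission order equals nonce order. A production connector
    persists this state so recovery neither loses nor duplicates a nonce.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._counters = {}

    def allocate(self, signing_key, next_nonce_on_chain=0):
        with self._lock:
            counter = self._counters.setdefault(
                signing_key, itertools.count(next_nonce_on_chain)
            )
            return next(counter)

alloc = AtSourceNonceAllocator()
print([alloc.allocate("0xfeedbeef") for _ in range(3)])  # sequential nonces
```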
"},{"location":"architecture/blockchain_connector_framework/#avoid-multiple-nonce-management-systems-against-the-same-signing-key","title":"Avoid multiple nonce management systems against the same signing key","text":"FFTM is optimized for cases where all transactions for a given signing address flow through the same FireFly connector. If you have signing and nonce allocation happening elsewhere, not going through the FireFly blockchain connector, then it is possible that the same nonce will be allocated in two places.
Be careful that the signing keys for transactions you stream through the Nonce Management of the FireFly blockchain connector are not used elsewhere.
If you must have multiple systems performing nonce management against the same keys you use with FireFly nonce management, you can set the transactions.nonceStateTimeout to 0 (or a low threshold like 100ms) to cause the nonce management to query the pending transaction pool of the node every time a nonce is allocated.
This shrinks the window for concurrent nonce allocation (to basically the same as if you had multiple simple web/mobile wallets used against the same key), but it does not eliminate it completely.
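For example, the setting might look like this in the connector configuration (a sketch; consult your connector's configuration reference for the exact file and section layout):

```yaml
transactions:
  # 0 (or a low value such as 100ms) forces a fresh query of the node's
  # pending transaction pool on every nonce allocation, shrinking the window
  # for concurrent allocation by another system using the same signing key.
  nonceStateTimeout: 0
```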
"},{"location":"architecture/blockchain_connector_framework/#why-at-source-nonce-management-was-chosen-vs-at-target","title":"Why \"at source\" nonce management was chosen vs. \"at target\"","text":"The \"at source\" approach to ordering used in FFTM could be compared with the \"at target\" allocation of nonces used in EthConnect.
The \"at target\" approach optimizes for throughput and ability to send new transactions to the chain, with an at-least-once delivery assurance to the applications.
An \"at target\" algorithm as used in EthConnect could resume transaction delivery automatically without operator intervention from almost all scenarios, including where nonces have been double allocated.
However, \"at target\" comes with two compromises that mean FFTM chose the \"at source\" approach was chosen for FFTM:
Individual transactions might fail in certain scenarios, and subsequent transactions will still be streamed to the chain. While desirable for automation and throughput, this reduces the ordering guarantee for high value transactions.
In crash recovery scenarios the assurance is at-least-once delivery for \"at target\" ordering (rather than \"exactly once\"), although the window can be made very small through various optimizations included in the EthConnect codebase.
The Transaction Handler is a pluggable component that allows customized logic to be applied to the gas pricing, signing, submission and re-submission of transactions to the blockchain.
The Transaction Handler can store custom state in the state store of the FFTM code, which is also reported in status within the FireFly API/Explorer on the operation.
A reference implementation is provided that queries the gas price from the blockchain node (e.g. eth_gasPrice for EVM JSON/RPC). The reference implementation is available here
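A gas-escalation policy of the kind such a handler might apply on re-submission can be sketched as follows (illustrative only; the bump percentage and cap are hypothetical parameters, not the reference implementation's actual defaults):

```python
def escalated_gas_price(base_price, attempt, bump_percent=10, cap=None):
    """Illustrative gas escalation for re-submitting a stalled transaction.

    Each retry bumps the previous price by bump_percent; an optional cap
    bounds the escalation. A real handler obtains base_price from the node
    (e.g. the eth_gasPrice JSON/RPC call) or a gas oracle.
    """
    price = base_price
    for _ in range(attempt):
        price = price * (100 + bump_percent) // 100
    return price if cap is None else min(price, cap)

# Attempt 0 is the first submission; each retry escalates by 10%.
print([escalated_gas_price(1_000_000_000, a) for a in range(3)])
```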
"},{"location":"architecture/blockchain_connector_framework/#event-streams","title":"Event Streams","text":"One of the largest pieces of heavy-lifting code in the FFTM codebase is the event stream support. This provides a WebSocket (and Webhook) interface that FireFly Core and the Tokens Connectors connect to in order to receive ordered streams of events from the blockchain.
The interface down to the blockchain layer is via go channels, and there are lifecycle interactions over the FFCAPI to the blockchain specific code to add and remove listeners for different types of blockchain events.
Some high-level architectural principles that informed the code:
One of the most important roles FireFly has is to take actions being performed by the local apps, process them, get them confirmed, and then deliver them back as a "stream of consciousness" to the application, alongside all the other events that are coming into the application from other FireFly Nodes in the network.
You might observe the problems solved in this architecture are similar to those in a message queuing system (like Apache Kafka, or a JMS/AMQP provider like ActiveMQ etc.).
However, we cannot directly replace the internal logic with such a runtime - because FireFly's job is to aggregate data from multiple runtimes that behave similarly to these:
So FireFly provides the convenient REST based management interface to simplify the world for application developers, by aggregating the data from multiple locations, and delivering it to apps in a deterministic sequence.
The sequence is made deterministic:
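One common way to derive such a deterministic sequence for on-chain events is to order by position in the chain, sketched here with illustrative field names (not necessarily FireFly's exact keys, and a real aggregator also merges off-chain sources):

```python
def event_sort_key(event):
    """Deterministic position-in-chain ordering key for blockchain events.

    Any two nodes observing the same canonical chain derive the same order.
    """
    return (event["blockNumber"], event["transactionIndex"], event["logIndex"])

events = [
    {"blockNumber": 7, "transactionIndex": 0, "logIndex": 2, "name": "C"},
    {"blockNumber": 5, "transactionIndex": 3, "logIndex": 0, "name": "A"},
    {"blockNumber": 7, "transactionIndex": 0, "logIndex": 1, "name": "B"},
]
print([e["name"] for e in sorted(events, key=event_sort_key)])
```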
The core architecture of a FireFly node can be broken down into the following three areas:
What fundamentally is a node - left side of the above diagram.
What are the core runtime responsibilities, and pluggable elements - right side of the above diagram.
Connectors and Infrastructure Runtimes. Connectors are the bridging runtimes that know how to talk to a particular runtime. Infrastructure Runtimes are the core runtimes for multi-party system activities. What is the code structure inside the core.
This demonstrates the problem that, at its core, FireFly exists to solve: the internal plumbing complexity of even a very simple set of enterprise blockchain / multi-party system interactions.
This is the kind of thing that enterprise projects have been solving from the ground up since the dawn of enterprise blockchain, and the level of engineering required, which is completely detached from business value, is very high.
The \"tramlines\" view shows how FireFly's pluggable model makes the job of the developer really simple:
This is deliberately a simple flow, and all kinds of additional layers might well layer on (and fit within the FireFly model):
This diagram shows the various plugins that are currently in the codebase and the layers in each plugin
This diagram shows the details of what goes into each layer of a FireFly plugin
"},{"location":"architecture/plugin_architecture/#overview","title":"Overview","text":"The FireFly node is built for extensibility, with separate pluggable runtimes orchestrated into a common API for developers. The mechanics of that pluggability for developers of new connectors are explained below:
This architecture is designed to provide separations of concerns to account for:
We welcome anyone to contribute to the FireFly project! If you're interested, this is a guide on how to get started. You don't have to be a blockchain expert to make valuable contributions! There are lots of places for developers of all experience levels to get involved.
\ud83e\uddd1\ud83c\udffd\u200d\ud83d\udcbb \ud83d\udc69\ud83c\udffb\u200d\ud83d\udcbb \ud83d\udc69\ud83c\udffe\u200d\ud83d\udcbb \ud83e\uddd1\ud83c\udffb\u200d\ud83d\udcbb \ud83e\uddd1\ud83c\udfff\u200d\ud83d\udcbb \ud83d\udc68\ud83c\udffd\u200d\ud83d\udcbb \ud83d\udc69\ud83c\udffd\u200d\ud83d\udcbb \ud83e\uddd1\ud83c\udffe\u200d\ud83d\udcbb \ud83d\udc68\ud83c\udfff\u200d\ud83d\udcbb \ud83d\udc68\ud83c\udffe\u200d\ud83d\udcbb \ud83d\udc69\ud83c\udfff\u200d\ud83d\udcbb \ud83d\udc68\ud83c\udffb\u200d\ud83d\udcbb
"},{"location":"contributors/#connect-with-us-on-discord","title":"\ud83d\ude80 Connect with us on Discord","text":"You can chat with maintainers and other contributors on Discord in the firefly channel: https://discord.gg/hyperledger
Join Discord Server
"},{"location":"contributors/#join-our-community-calls","title":"\ud83d\udcc5 Join our Community Calls","text":"Community calls are a place to talk to other contributors, maintainers, and other people interested in FireFly. Maintainers often discuss upcoming changes and proposed new features on these calls. These calls are a great way for the community to give feedback on new ideas, ask questions about FireFly, and hear how others are using FireFly to solve real world problems.
Please see the FireFly Calendar for the current meeting schedule, and the link to join. Everyone is welcome to join, regardless of background or experience level.
"},{"location":"contributors/#find-your-first-issue","title":"\ud83d\udd0d Find your first issue","text":"If you're looking for somewhere to get started in the FireFly project and want something small and relatively easy, take a look at issues tagged with \"Good first issue\". You can definitely work on other things if you want to. These are only suggestions for easy places to get started.
See \"Good First Issues\"
NOTE Hyperledger FireFly has a microservice architecture so it has many different GitHub repos. Use the link or the button above to look for \"Good First Issues\" across all the repos at once.
Here are some other suggestions of places to get started, based on experience you may already have:
"},{"location":"contributors/#any-level-of-experience","title":"Any level of experience","text":"If you're looking to make your first open source contribution, the FireFly documentation is a great place to make small, easy improvements. These improvements are also very valuable, because they help the next person who may want to know the same thing.
Here are some detailed instructions on Contributing to Documentation
"},{"location":"contributors/#go-experience","title":"Go experience","text":"If you have some experience in Go and really want to jump into FireFly, the FireFly Core is the heart of the project.
Here are some detailed instructions on Setting up a FireFly Core Development Environment.
"},{"location":"contributors/#little-or-no-go-experience-but-want-to-learn","title":"Little or no Go experience, but want to learn","text":"If you don't have a lot of experience with Go, but are interested in learning, the FireFly CLI might be a good place to start. The FireFly CLI is a tool to set up local instances of FireFly for building apps that use FireFly, and for doing development on FireFly itself.
"},{"location":"contributors/#typescript-experience","title":"TypeScript experience","text":"If you have some experience in TypeScript, there are several FireFly microservices that are written in TypeScript. The Data Exchange is used for private messaging between FireFly nodes. The ERC-20/ERC-721 Tokens Connector and ERC-1155 Tokens Connector are used to abstract token contract specifics from the FireFly Core.
"},{"location":"contributors/#reacttypescript-experience","title":"React/TypeScript experience","text":"If you want to do some frontend development, the FireFly UI is written in TypeScript and React.
"},{"location":"contributors/#go-and-blockchain-experience","title":"Go and blockchain experience","text":"If you already have some experience with blockchain and want to work on some backend components, the blockchain connectors, firefly-ethconnect (for Ethereum) and firefly-fabconnect (for Fabric) are great places to get involved.
"},{"location":"contributors/#make-changes","title":"\ud83d\udcdd Make changes","text":"To contribute to the repository, please fork the repository that you want to change. Then clone your fork locally on your machine and make your changes. As you commit your changes, push them to your fork. More information on making commits below.
"},{"location":"contributors/#commit-with-developer-certificate-of-origin","title":"\ud83d\udcd1 Commit with Developer Certificate of Origin","text":"As with all Hyperledger repositories, FireFly requires proper sign-off on every commit that is merged into the main branch. The sign-off indicates that you certify the changes you are submitting are in accordance with the Developer Certificate of Origin. To sign-off on your commit, you can use the -s flag when you commit changes.
git commit -s -m \"Your commit message\"\n This will add a string like this to the end of your commit message:
\"Signed-off-by: Your Name <your-email@address>\"\n NOTE: Sign-off is not the same thing as signing your commits with a private key. Both operations use a similar flag, which can be confusing. The one you want is the lowercase -s \ud83d\ude42
When you're ready to submit your changes for review, open a Pull Request back to the upstream repository. When you open your pull request, the maintainers will automatically be notified. Additionally, a series of automated checks will be performed on your code to make sure it passes certain repository specific requirements.
Maintainers may have suggestions on things to improve in your pull request. It is our goal to get code that is beneficial to the project merged as quickly as possible, so we don't like to leave pull requests hanging around for a long time. If the project maintainers are satisfied with the changes, they will approve and merge the pull request.
Thanks for your interest in collaborating on this project!
"},{"location":"contributors/#inclusivity","title":"Inclusivity","text":"The Hyperledger Foundation and the FireFly project are committed to fostering a community that is welcoming to all people. When participating in community discussions, contributing code, or documentation, please abide by the following guidelines:
This page details some of the more advanced options of the FireFly CLI
"},{"location":"contributors/advanced_cli_usage/#understanding-how-the-cli-uses-firefly-releases","title":"Understanding how the CLI uses FireFly releases","text":""},{"location":"contributors/advanced_cli_usage/#the-manifestjson-file","title":"The manifest.json file","text":"FireFly has a manifest.json file in the root of the repo. This file contains a list of versions (both tag and sha) for each of the microservices that should be used with this specific commit.
Here is an example of what the manifest.json looks like:
{\n \"ethconnect\": {\n \"image\": \"ghcr.io/hyperledger/firefly-ethconnect\",\n \"tag\": \"v3.0.4\",\n \"sha\": \"0b7ce0fb175b5910f401ff576ced809fe6f0b83894277c1cc86a73a2d61c6f41\"\n },\n \"fabconnect\": {\n \"image\": \"ghcr.io/hyperledger/firefly-fabconnect\",\n \"tag\": \"v0.9.0\",\n \"sha\": \"a79a4c66b0a2551d5122d019c15c6426e8cdadd6566ce3cbcb36e008fb7861ca\"\n },\n \"dataexchange-https\": {\n \"image\": \"ghcr.io/hyperledger/firefly-dataexchange-https\",\n \"tag\": \"v0.9.0\",\n \"sha\": \"0de5b1db891a02871505ba5e0507821416d9fa93c96ccb4b1ba2fac45eb37214\"\n },\n \"tokens-erc1155\": {\n \"image\": \"ghcr.io/hyperledger/firefly-tokens-erc1155\",\n \"tag\": \"v0.9.0-20211019-01\",\n \"sha\": \"aabc6c483db408896838329dab5f4b9e3c16d1e9fa9fffdb7e1ff05b7b2bbdd4\"\n }\n}\n"},{"location":"contributors/advanced_cli_usage/#default-cli-behavior-for-releases","title":"Default CLI behavior for releases","text":"When creating a new stack, the CLI will, by default, check the latest non-pre-release version of FireFly and look at its manifest.json file that was part of that commit. It will then use the Docker images referenced in that file to determine which images it should pull for the new stack. The specific image tag and sha are written to the docker-compose.yml file for that stack, so restarting or resetting a stack will never pull a newer image.
If you need to run some other version that is not the latest release of FireFly, you can tell the FireFly CLI which release to use by using the --release or -r flag. For example, to explicitly use v0.9.0 run this command to initialize the stack:
ff init -r v0.9.0\n"},{"location":"contributors/advanced_cli_usage/#running-an-unreleased-version-of-one-or-more-services","title":"Running an unreleased version of one or more services","text":"If you need to run an unreleased version of FireFly or one of its microservices, you can point the CLI to a specific manifest.json on your local disk. To do this, use the --manifest or -m flag. For example, if you have a file at ~/firefly/manifest.json:
ff init -m ~/firefly/manifest.json\n If you need to test a locally built docker image of a specific service, you'll want to edit the manifest.json before running ff init. Let's look at an example where we want to run a locally built version of fabconnect. The same steps apply to any of FireFly's microservices.
From the fabconnect project directory, build and tag a new Docker image:
docker build -t ghcr.io/hyperledger/firefly-fabconnect .\n"},{"location":"contributors/advanced_cli_usage/#edit-your-manifestjson-file","title":"Edit your manifest.json file","text":"Next, edit the fabconnect section of the manifest.json file. You'll want to remove the tag and sha and add a \"local\": true field, so it looks like this:
...\n \"fabconnect\": {\n \"image\": \"ghcr.io/hyperledger/firefly-fabconnect\",\n \"local\": true\n },\n...\n"},{"location":"contributors/advanced_cli_usage/#initialize-the-stack-with-the-custom-manifestjson-file","title":"Initialize the stack with the custom manifest.json file","text":" ff init local-test -b fabric -m ~/Code/hyperledger/firefly/manifest.json\n ff start local-test\n If you are iterating on changes locally, you can get the CLI to use an updated image by doing the following:
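The manifest edit described above can also be scripted; a sketch (use_local_image is a hypothetical helper, and the tag/sha values are taken from the example manifest.json earlier on this page):

```python
import json

def use_local_image(manifest, service):
    """Return a copy of manifest.json content with one service marked local.

    Drops the pinned tag/sha for the service and sets "local": true,
    matching the hand edit shown above.
    """
    entry = {k: v for k, v in manifest[service].items() if k not in ("tag", "sha")}
    entry["local"] = True
    return {**manifest, service: entry}

manifest = {
    "fabconnect": {
        "image": "ghcr.io/hyperledger/firefly-fabconnect",
        "tag": "v0.9.0",
        "sha": "a79a4c66b0a2551d5122d019c15c6426e8cdadd6566ce3cbcb36e008fb7861ca",
    }
}
print(json.dumps(use_local_image(manifest, "fabconnect"), indent=2))
```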
ff reset <stack_name> and ff start <stack_name> to reset the data, and use the newer image. You may have noticed that FireFly core is actually not listed in the manifest.json file. If you want to run a locally built image of FireFly Core, you can follow the same steps above, but instead of editing an existing section in the file, we'll add a new one for FireFly.
From the firefly project directory, build and tag a new Docker image:
make docker\n"},{"location":"contributors/advanced_cli_usage/#initialize-the-stack-with-the-custom-manifestjson-file_1","title":"Initialize the stack with the custom manifest.json file","text":" ff init local-test -m ~/Code/hyperledger/firefly/manifest.json\n ff start local-test\n"},{"location":"contributors/code_hierarchy/","title":"FireFly Code Hierarchy","text":"Use the following diagram to better understand the hierarchy amongst the core FireFly components, plugins and utility frameworks:
\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 cmd \u251c\u2500\u2500\u2524 firefly [Ff]\u2502 - CLI entry point\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502 \u2502 - Creates parent context\n \u2502 \u2502 - Signal handling\n \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - HTTP listener (Gorilla mux)\n\u2502 internal \u251c\u2500\u2500\u2524 api [As]\u2502 * TLS (SSL), CORS configuration etc.\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502 server \u2502 * WS upgrade on same port\n \u2502 \u2502 - REST route definitions\n \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Simple routing logic only, all processing deferred to orchestrator\n \u2502\n \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - REST route definition framework\n \u2502 openapi [Oa]\u2502 * Standardizes Body, Path, Query, Filter semantics\n \u2502 spec | - OpenAPI 3.0 (Swagger) generation\n \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Including Swagger. 
UI\n \u2502\n \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - WebSocket server\n \u2502 [Ws]\u2502 * Developer friendly JSON based protocol business app development\n \u2502 websockets \u2502 * Reliable sequenced delivery\n \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * _Event interface [Ei] supports lower level integration with other compute frameworks/transports_\n \u2502\n \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Extension point interface to listen for database change events\n \u2502 admin [Ae]\u2502 * For building microservice extensions to the core that run externally\n \u2502 events | * Used by the Transaction Manager component\n \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Filtering to specific object types\n \u2502\n \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Core data types\n \u2502 fftypes [Ft]\u2502 * Used for API and Serialization\n \u2502 \u2502 * APIs can mask fields on input via router definition\n \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502\n \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Core runtime server. 
Initializes and owns instances of:\n \u2502 [Or]\u2502 * Components: Implement features\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2524 orchestrator \u2502 * Plugins: Pluggable infrastructure services\n \u2502 \u2502 \u2502 \u2502 - Exposes actions to router\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Processing starts here for all API calls\n \u2502 \u2502\n \u2502 Components: Components do the heavy lifting within the engine\n \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Integrates with Blockchain Smart Contract logic across blockchain technologies\n \u2502 \u251c\u2500\u2500\u2500\u2524 contract [Cm]\u2502 * Generates OpenAPI 3 / Swagger definitions for smart contracts, and propagates to network\n \u2502 \u2502 \u2502 manager \u2502 * Manages listeners for native Blockchain events, and routes those to application events\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Convert to/from native Blockchain interfaces (ABI etc.) 
and FireFly Interface [FFI] format\n \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Maintains a view of the entire network\n \u2502 \u251c\u2500\u2500\u2500\u2524 network [Nm]\u2502 * Integrates with network permissioning [NP] plugin\n \u2502 \u2502 \u2502 map \u2502 * Integrates with broadcast plugin\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Handles hierarchy of member identity, node identity and signing identity\n \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Broadcast of data to all parties in the network\n \u2502 \u251c\u2500\u2500\u2500\u2524 broadcast [Bm]\u2502 * Implements dispatcher for batch component\n \u2502 \u2502 \u2502 manager | * Integrates with shared storage interface [Ss] plugin\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Integrates with blockchain interface [Bi] plugin\n \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Send private data to individual parties in the network\n \u2502 \u251c\u2500\u2500\u2500\u2524 private [Pm]\u2502 * Implements dispatcher for batch component\n \u2502 \u2502 \u2502 messaging | * Integrates with the data exchange [Dx] plugin\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Messages can be pinned and sequenced via the blockchain, or just sent\n \u2502 \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Groups of parties, with isolated data and/or blockchains\n \u2502 \u2502 \u2502 group [Gm]\u2502 * Integrates with data exchange [Dx] plugin\n \u2502 
\u2502 \u2502 manager \u2502 * Integrates with blockchain interface [Bi] plugin\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Private data management and validation\n \u2502 \u251c\u2500\u2500\u2500\u2524 data [Dm]\u2502 * Implements dispatcher for batch component\n \u2502 \u2502 \u2502 manager \u2502 * Integrates with data exchange [Dx] plugin\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Integrates with blockchain interface [Bi] plugin\n \u2502 \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - JSON data schema management and validation (architecture extensible to XML and more)\n \u2502 \u2502 \u2502 json [Jv]\u2502 * JSON Schema validation logic for outbound and inbound messages\n \u2502 \u2502 \u2502 validator \u2502 * Schema propagation\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Integrates with broadcast plugin\n \u2502 \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Binary data addressable via ID or Hash\n \u2502 \u2502 \u2502 blobstore [Bs]\u2502 * Integrates with data exchange [Dx] plugin\n \u2502 \u2502 \u2502 \u2502 * Hashes data, and maintains mapping to payload references in blob storage\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Integrates with blockchain interface [Bi] plugin\n \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Download from shared storage\n \u2502 
\u251c\u2500\u2500\u2500\u2524 shared [Sd]\u2502 * Parallel asynchronous download\n \u2502 \u2502 \u2502 download \u2502 * Resilient retry and crash recovery\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Notification to event aggregator on completion\n \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u251c\u2500\u2500\u2500\u2524 identity [Im] \u2502 - Central identity management service across components\n \u2502 \u2502 \u2502 manager \u2502 * Resolves API input identity + key combos (short names, formatting etc.)\n \u2502 \u2502 \u2502 \u2502 * Resolves registered on-chain signing keys back to identities\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Integrates with Blockchain Interface and pluggable Identity Interface (TBD)\n \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Keeps track of all operations performed against external components via plugins\n \u2502 \u251c\u2500\u2500\u2500\u2524 operation [Om]\u2502 * Updates database with inputs/outputs\n \u2502 \u2502 \u2502 manager \u2502 * Provides consistent retry semantics across plugins\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Private data management and validation\n \u2502 \u251c\u2500\u2500\u2500\u2524 event [Em]\u2502 * Implements dispatcher for batch component\n \u2502 \u2502 \u2502 manager \u2502 * Integrates with data exchange [Dx] plugin\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Integrates 
with blockchain interface [Bi] plugin\n \u2502 \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Handles incoming external data\n \u2502 \u2502 \u2502 [Ag]\u2502 * Integrates with data exchange [Dx] plugin\n \u2502 \u2502 \u2502 aggregator \u2502 * Integrates with shared storage interface [Ss] plugin\n \u2502 \u2502 \u2502 \u2502 * Integrates with blockchain interface [Bi] plugin\n \u2502 \u2502 \u2502 \u2502 - Ensures valid events are dispatched only once all data is available\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Context aware, to prevent block-the-world scenarios\n \u2502 \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Subscription manager\n \u2502 \u2502 \u2502 [Sm]\u2502 * Creation and management of subscriptions\n \u2502 \u2502 \u2502 subscription \u2502 * Creation and management of subscriptions\n \u2502 \u2502 \u2502 manager \u2502 * Message to Event matching logic\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502 \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Manages delivery of events to connected applications\n \u2502 \u2502 \u2502 event [Ed]\u2502 * Integrates with data exchange [Dx] plugin\n \u2502 \u2502 \u2502 dispatcher \u2502 * Integrates with blockchain interface [Bi] plugin\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Token creation/transfer initiation, indexing and coordination\n \u2502 \u251c\u2500\u2500\u2500\u2524 asset 
[Am]\u2502 * Fungible tokens: Digitized value/settlement (coins)\n \u2502 \u2502 \u2502 manager \u2502 * Non-fungible tokens: NFTs / globally uniqueness / digital twins\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Full indexing of transaction history\n \u2502 \u2502 [REST/WebSockets]\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\n \u2502 \u2502 \u2502 ERC-20 / ERC-721 \u251c\u2500\u2500\u2500\u2524 ERC-1155 \u251c\u2500\u2500\u2500\u2524 Simple framework for building token connectors\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\n \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u251c\u2500\u2500\u2500\u2524 sync / [Sa] \u2502 - Sync/Async Bridge\n \u2502 \u2502 \u2502 async bridge \u2502 * Provides synchronous request/reply APIs\n \u2502 \u2502 \u2502 \u2502 * Translates to underlying event-driven API\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Aggregates messages and data, with rolled up hashes for pinning\n \u2502 \u251c\u2500\u2500\u2500\u2524 batch [Ba]\u2502 * Pluggable dispatchers\n \u2502 \u2502 \u2502 manager \u2502 - Database decoupled from main-line API processing\n \u2502 \u2502 \u2502 \u2502 * See architecture diagrams for more info on active/active sequencing\n \u2502 \u2502 
\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 - Manages creation of batch processor instances\n \u2502 \u2502 \u2502\n \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Short lived agent spun up to assemble batches on demand\n \u2502 \u2502 \u2502 batch [Bp]\u2502 * Coupled to an author+type of messages\n \u2502 \u2502 \u2502 processor \u2502 - Builds batches of 100s messages for efficient pinning\n \u2502 \u2502 \u2502 \u2502 * Aggregates messages and data, with rolled up hashes for pinning\n \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 - Shuts down automatically after a configurable inactivity period\n \u2502 ... more TBD\n \u2502\nPlugins: Each plugin comprises a Go shim, plus a remote agent microservice runtime (if required)\n \u2502\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Blockchain Interface\n \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 [Bi]\u2502 * Transaction submission - including signing key management\n \u2502 \u2502 blockchain \u2502 * Event listening\n \u2502 \u2502 interface \u2502 * Standardized operations, and custom on-chain coupling\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502 \u2502\n \u2502 \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 
\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u2502 ethereum \u2502 \u2502 fabric \u2502 \u2502 corda/cordapps \u2502\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502 [REST/WebSockets]\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\n \u2502 \u2502 transaction manager [Tm] \u251c\u2500\u2500\u2500\u2524 Connector API [ffcapi] \u251c\u2500\u2500\u2500\u2524 Simple framework for building blockchain connectors\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\n \u2502\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Token interface\n \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 tokens [Ti]\u2502 * Standardizes core concepts: token pools, transfers, approvals\n \u2502 \u2502 interface \u2502 * Pluggable across token standards\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Supports simple implementation of custom token standards via microservice connector\n \u2502 
[REST/WebSockets]\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\n \u2502 \u2502 ERC-20 / ERC-721 \u251c\u2500\u2500\u2500\u2524 ERC-1155 \u251c\u2500\u2500\u2500\u2524 Simple framework for building token connectors\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\n \u2502\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - P2P Content Addresssed Filesystem\n \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 shared [Si]\u2502 * Payload upload / download\n \u2502 \u2502 storage \u2502 * Payload reference management\n \u2502 \u2502 interface \u2502\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502 \u2502\n \u2502 \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 ... 
extensible to any shared storage sytem, accessible to all members\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u2502 ipfs \u2502\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Private Data Exchange\n \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 data [Dx]\u2502 * Blob storage\n \u2502 \u2502 exchange \u2502 * Private secure messaging\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Secure file transfer\n \u2502 \u2502\n \u2502 \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 ... extensible to any private data exchange tech\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u2502 https / MTLS \u2502 \u2502 Kaleido \u2502\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - API Authentication and Authorization Interface\n \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 api auth [Aa]\u2502 * Authenticates security credentials (OpenID Connect id token JWTs etc.)\n \u2502 \u2502 \u2502 * Extracts API/user identity (for identity interface to map)\n \u2502 
\u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Enforcement point for fine grained API access control\n \u2502 \u2502\n \u2502 \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 ... extensible other single sign-on technologies\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u2502 apikey \u2502 \u2502 jwt \u2502\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Database Interactions\n \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 database [Di]\u2502 * Create, Read, Update, Delete (CRUD) actions\n \u2502 \u2502 interace \u2502 * Filtering and update definition interace\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Migrations and Indexes\n \u2502 \u2502\n \u2502 \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 ... 
extensible to NoSQL (CouchDB / MongoDB etc.)\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u2502 sqlcommon \u2502\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502 \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 ... extensible other SQL databases\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u2502 postgres \u2502 \u2502 sqlite3 \u2502\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Connects the core event engine to external frameworks and applications\n \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 event [Ei]\u2502 * Supports long-lived (durable) and ephemeral event subscriptions\n \u2502 \u2502 interface \u2502 * Batching, filtering, all handled in core prior to transport\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Interface supports connect-in (websocket) and connect-out (broker runtime style) plugins\n \u2502 \u2502\n \u2502 \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 ... 
extensible to additional event buses (Kafka, NATS, AMQP etc.)\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u2502 websockets \u2502 \u2502 webhooks \u2502\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502 ... more TBD\n\n Additional utility framworks\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - REST API client\n \u2502 rest [Re]\u2502 * Provides convenience and logging\n \u2502 client \u2502 * Standardizes auth, config and retry logic\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Built on Resty\n\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - WebSocket client\n \u2502 wsclient [Wc]\u2502 * Provides convenience and logging\n \u2502 \u2502 * Standardizes auth, config and reconnect logic\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Built on Gorilla WebSockets\n\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Translation framework\n \u2502 i18n [In]\u2502 * Every translations must be added to `en_translations.json` - with an `FF10101` key\n \u2502 \u2502 * Errors are wrapped, providing extra features from the `errors` package (stack etc.)\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Description translations also supported, such as OpenAPI description\n\n 
\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Logging framework\n \u2502 log [Lo]\u2502 * Logging framework (logrus) integrated with context based tagging\n \u2502 \u2502 * Context is used throughout the code to pass API invocation context, and logging context\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Example: Every API call has an ID that can be traced, as well as a timeout\n\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 - Configuration\n \u2502 config [Co]\u2502 * File and Environment Variable based logging framework (viper)\n \u2502 \u2502 * Primary config keys all defined centrally\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 * Plugins integrate by returning their config structure for unmarshaling (JSON tags)\n"},{"location":"contributors/code_overview/","title":"FireFly Code Overview","text":""},{"location":"contributors/code_overview/#developer-intro","title":"Developer Intro","text":"FireFly is a second generation implementation re-engineered from the ground up to improve developer experience, runtime performance, and extensibility.
This means a simplified REST/WebSocket programming model for app development, and a wider range of infrastructure options for deployment.
It also means a focus on an architecture and code structure for a vibrant open source community.
A few highlights:
Asset, Data, Message, Event, Topic, Transaction
Added flexibility, with a simplified developer experience:
Versioning of data definitions
Context construct to link related events into a single sequence
## Directories
FireFly has a plugin-based architecture design, with a microservice runtime footprint. As such, there are a number of repos, and the list will grow as the community evolves.
But not to worry, one of those repos is a CLI designed to get you running with all the components you need in minutes!
Note that only the projects primarily built to support FireFly are listed here, not all of the ecosystem of projects that integrate underneath the plugins.
"},{"location":"contributors/dev_environment_setup/","title":"Setting up a FireFly Core Development Environment","text":"This guide will walk you through setting up your machine for contributing to FireFly, specifically the FireFly core.
"},{"location":"contributors/dev_environment_setup/#dependencies","title":"Dependencies","text":"You will need a few prerequisites set up on your machine before you can build FireFly from source. We recommend doing development on macOS, Linux, or WSL 2.0.
The first step to setting up a local development environment is to install the FireFly CLI. Please see the relevant section of the Getting Started Guide to install the FireFly CLI.
"},{"location":"contributors/dev_environment_setup/#installing-go-and-setting-up-your-gopath","title":"Installing Go and setting up your GOPATH","text":"We recommend following the instructions on golang.org to install Go, rather than installing Go from another package manager such as brew. Although it is possible to install Go any way you'd like, setting up your GOPATH may differ from the following instructions.
After installing Go, you will need to add a few environment variables to your shell run commands file. This is usually a hidden file in your home directory called .bashrc or .zshrc, depending on which shell you're using.
Add the following lines to your .bashrc or .zshrc file:
export GOPATH=$HOME/go\nexport GOROOT=\"/usr/local/go\"\nexport PATH=\"$PATH:${GOPATH}/bin:${GOROOT}/bin\"\n"},{"location":"contributors/dev_environment_setup/#building-firefly","title":"Building FireFly","text":"After installing dependencies, building FireFly from source is very easy. Just clone the repo:
git clone git@github.com:hyperledger/firefly.git && cd firefly\n And run the Makefile to run tests, and compile the app
make\n If you want to install the binary on your path (assuming your Go Home is already on your path), from inside the project directory you can simply run:
go install\n"},{"location":"contributors/dev_environment_setup/#install-the-cli","title":"Install the CLI","text":"Please check the CLI Installation instructions for the best way to install the CLI on your machine: https://github.com/hyperledger/firefly-cli#install-the-cli
"},{"location":"contributors/dev_environment_setup/#set-up-a-development-stack","title":"Set up a development stack","text":"Now that you have both FireFly and the FireFly CLI installed, it's time to create a development stack. The CLI can be used to create a docker-compose environment that runs the entirety of a FireFly network. This will include several different processes for each member of the network. This is very useful for people who want to build apps that use FireFly's API. It can also be useful if you want to make changes to FireFly itself; however, we need to set up the stack slightly differently in that case.
Essentially what we are going to do is have docker-compose run everything in the FireFly network except one FireFly core process. We'll run this FireFly core process on our host machine, and configure it to connect to the rest of the microservices running in docker-compose. This means we could launch FireFly from Visual Studio Code or some other IDE and use a debugger to see what's going on inside FireFly as it's running.
We'll call this stack dev. We're also going to add --external 1 to the end of our command to create the new stack:
ff init dev --external 1\n This tells the CLI that we want to manage one of the FireFly core processes outside the docker-compose stack. For convenience, the CLI will still generate a config file for this process though.
"},{"location":"contributors/dev_environment_setup/#start-the-stack","title":"Start the stack","text":"To start your new stack simply run:
ff start dev\n At a certain point in the startup process, the CLI will pause and wait for up to two minutes for you to start the other FireFly node. There are two different ways you can run the external FireFly core process.
"},{"location":"contributors/dev_environment_setup/#1-from-another-terminal","title":"1) From another terminal","text":"The CLI will print out the command line which can be copied and pasted into another terminal window to run FireFly. This command should be run from the firefly core project directory. Here is an example of the command that the CLI will tell you to run:
firefly -f ~/.firefly/stacks/dev/runtime/config/firefly_core_0.yml\n NOTE: The first time you run FireFly with a fresh database, it will need a directory of database migrations to apply to the empty database. If you run FireFly from the firefly project directory you cloned from GitHub, it will automatically find these and apply them. If you run it from some other directory, you will have to point FireFly to the migrations on your own.
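If you do run the binary from outside the firefly project directory, the migrations location can be pointed at explicitly in the core config file. Here is a sketch, assuming the sqlite3 database plugin and a local clone of the firefly repo (the exact keys and path are assumptions — check the configuration reference for your database plugin):

```yaml
database:
  type: sqlite3
  sqlite3:
    migrations:
      auto: true
      # Assumed: absolute path to the sqlite migrations in your firefly clone
      directory: /home/you/git/firefly/db/migrations/sqlite
```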
If you named your stack dev, there is a launch.json file for Visual Studio Code already in the project directory. If you have the project open in Visual Studio Code, you can either press the F5 key to run it, or go to the \"Run and Debug\" view in Visual Studio Code, and click \"Run FireFly Core\".
Now you should have a full FireFly stack up and running, and be able to debug FireFly using your IDE. Happy hacking!
NOTE: Because firefly-ui is a separate repo, unless you also start a UI dev server for the external FireFly core, the default UI path will not load. This is expected, and if you're just working on FireFly core itself, you don't need to worry about it.
Refer to Advanced CLI Usage.
"},{"location":"contributors/docs_setup/","title":"Contributing to Documentation","text":"This guide will walk you through setting up your machine for contributing to FireFly documentation. Documentation contributions are extremely valuable. If you discover something is missing in the docs, we would love to include your additions or clarifications to help the next person who has the same question.
This doc site is generated by a set of Markdown files in the main FireFly repository, under the ./doc-site directory. You can browse the source for the current live site in GitHub here: https://github.com/hyperledger/firefly/tree/main/doc-site
The process for updating the documentation is really easy! You'll follow the same basic steps outlined in the Contributor's Guide. Here are the detailed steps for contributing to the docs:
git commit -s!
This FireFly documentation site is based on the Hyperledger documentation template. The template utilizes MkDocs (documentation at mkdocs.org) and the theme Material for MkDocs (documentation at Material for MkDocs). Material adds a number of extra features to MkDocs, and Hyperledger repositories can take advantage of the theme's Insiders capabilities.
"},{"location":"contributors/docs_setup/#prerequisites","title":"Prerequisites","text":"To test the documents and update the published site, the following tools are needed:
mkdocs.yml file and needed for deploying the site to gh-pages.
git can be installed locally, as described in the Install Git Guide from GitHub.
Python 3 can be installed locally, as described in the Python Getting Started guide.
It is recommended to install your Python dependencies in a virtual environment in case you have other conflicting Python installations on your machine. This also removes the need to install these packages globally on your computer.
cd doc-site\npython3 -m venv venv\nsource venv/bin/activate\n"},{"location":"contributors/docs_setup/#mkdocs","title":"MkDocs","text":"The MkDocs-related items can be installed locally, as described in the Material for MkDocs installation instructions. The short, case-specific version of those instructions follows:
pip install -r requirements.txt\n"},{"location":"contributors/docs_setup/#verify-setup","title":"Verify Setup","text":"To verify your setup, check that you can run mkdocs by running the command mkdocs --help to see the help text.
The commands you will usually use with mkdocs are:
mkdocs serve - Start the live-reloading docs server.
mkdocs build - Build the documentation site.
mkdocs -h - Print help message and exit.
mkdocs.yml # The configuration file.\ndocs/\n index.md # The documentation homepage.\n SUMMARY.md # The main left nav\n ... # Other markdown pages, images and other files.\n"},{"location":"contributors/release_guide/","title":"Release Guide","text":"This guide will walk you through creating a release.
"},{"location":"contributors/release_guide/#versioning-scheme","title":"Versioning scheme","text":"FireFly follows semantic versioning. For more details on how we determine which version to use please see the Versioning Scheme guide.
"},{"location":"contributors/release_guide/#the-manifestjson-file","title":"Themanifest.json file","text":"FireFly has a manifest.json file in the root of the repo. This file contains a list of versions (both tag and sha) for each of the microservices that should be used with this specific commit. If you need FireFly to use a newer version of a microservice listed in this file, you should update the manifest.json file, commit it, and include it in your PR. This will trigger an end-to-end test run, using the specified versions.
Here is an example of what the manifest.json looks like:
{\n \"ethconnect\": {\n \"image\": \"ghcr.io/hyperledger/firefly-ethconnect\",\n \"tag\": \"v3.0.4\",\n \"sha\": \"0b7ce0fb175b5910f401ff576ced809fe6f0b83894277c1cc86a73a2d61c6f41\"\n },\n \"fabconnect\": {\n \"image\": \"ghcr.io/hyperledger/firefly-fabconnect\",\n \"tag\": \"v0.9.0\",\n \"sha\": \"a79a4c66b0a2551d5122d019c15c6426e8cdadd6566ce3cbcb36e008fb7861ca\"\n },\n \"dataexchange-https\": {\n \"image\": \"ghcr.io/hyperledger/firefly-dataexchange-https\",\n \"tag\": \"v0.9.0\",\n \"sha\": \"0de5b1db891a02871505ba5e0507821416d9fa93c96ccb4b1ba2fac45eb37214\"\n },\n \"tokens-erc1155\": {\n \"image\": \"ghcr.io/hyperledger/firefly-tokens-erc1155\",\n \"tag\": \"v0.9.0-20211019-01\",\n \"sha\": \"aabc6c483db408896838329dab5f4b9e3c16d1e9fa9fffdb7e1ff05b7b2bbdd4\"\n }\n}\n NOTE: You can run make manifest in the FireFly core source directory, and a script will run to automatically get the latests non-pre-release version of each of FireFly's microservices. If you need to use a snapshot or pre-release version you should edit manifest.json file manually, as this script will not fetch those versions.
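As an illustration of how these entries can be consumed, the sketch below (a hypothetical helper, not part of the FireFly tooling) resolves each manifest entry to a digest-pinned image reference, which is useful when you want deployments to be pinned by sha rather than by mutable tag:

```python
import json

def pinned_images(manifest: dict) -> dict:
    """Map each service name to an image reference pinned by its sha256 digest."""
    return {
        name: f"{entry['image']}@sha256:{entry['sha']}"
        for name, entry in manifest.items()
    }

# A single entry copied from the example manifest.json above
manifest = json.loads("""
{
  "ethconnect": {
    "image": "ghcr.io/hyperledger/firefly-ethconnect",
    "tag": "v3.0.4",
    "sha": "0b7ce0fb175b5910f401ff576ced809fe6f0b83894277c1cc86a73a2d61c6f41"
  }
}
""")

print(pinned_images(manifest)["ethconnect"])
# → ghcr.io/hyperledger/firefly-ethconnect@sha256:0b7ce0fb175b5910f401ff576ced809fe6f0b83894277c1cc86a73a2d61c6f41
```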
Releases and builds are managed by GitHub. New binaries and/or Docker images will automatically be created when a new release is published. The easiest way to create a release is through the web UI for the repo that you wish to release.
"},{"location":"contributors/release_guide/#1-navigate-to-the-release-page-for-the-repo","title":"1) Navigate to the release page for the repo","text":""},{"location":"contributors/release_guide/#2-click-the-draft-a-new-release-button","title":"2) Click theDraft a new release button","text":""},{"location":"contributors/release_guide/#3-fill-out-the-form-for-your-release","title":"3) Fill out the form for your release","text":"It is recommended to start with the auto-generated release notes. Additional notes can be added as-needed.
"},{"location":"contributors/release_guide/#automatic-docker-builds","title":"Automatic Docker builds","text":"After cutting a new release, a GitHub Action will automatically start a new Docker build, if the repo has a Docker image associated with it. You can check the status of the build by clicking the \"Actions\" tab along the top of the page, for that repo.
"},{"location":"contributors/version_scheme/","title":"Versioning Scheme","text":"This page describes FireFly's versioning scheme
"},{"location":"contributors/version_scheme/#semantic-versioning","title":"Semantic versioning","text":"FireFly follows semantic versioning. In summary, this means:
Given a version number MAJOR.MINOR.PATCH, increment the:
When creating a new release, the release name and tag should be the semantic version, prefixed with a v. For example, a release name/tag could be v0.9.0.
For pre-release versions for testing, we append a date and index to the end of the most recently released version. For example, if we needed to create a pre-release based on v0.9.0 and today's date is October 22, 2021, the version name/tag would be: v0.9.0-20211022-01. If for some reason you needed to create another pre-release version in the same day (hey, stuff happens), the name/tag for that one would be v0.9.0-20211022-02.
For pre-releases that are candidates to become a new major or minor release, the release name/tag will be based on the release that the candidate will become (as opposed to the test releases above, which are based on the previous release). For example, if the current latest release is v0.9.0 but we want to create an alpha release for 1.0, the release name/tag would be v1.0.0-alpha.1.
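The date-and-index convention for test pre-releases can be captured in a few lines. This is a sketch only — `prerelease_tag` is a hypothetical helper, not part of any FireFly tooling:

```python
from datetime import date


def prerelease_tag(base_version: str, on: date, index: int = 1) -> str:
    """Build a test pre-release tag such as v0.9.0-20211022-01 from the
    most recent release, the date, and a same-day index."""
    return f"{base_version}-{on.strftime('%Y%m%d')}-{index:02d}"
```

For example, a second pre-release based on v0.9.0 on October 22, 2021 would be `prerelease_tag("v0.9.0", date(2021, 10, 22), index=2)`, i.e. v0.9.0-20211022-02.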
Find answers to the most commonly asked FireFly questions.
"},{"location":"faqs/#how-does-firefly-enable-multi-chain-applications","title":"How does FireFly enable multi-chain applications?","text":"It's best to think about FireFly as a rich orchestration layer that sits one layer above the blockchain. FireFly helps to abstract away much of the complex blockchain functionality (such as data exchange, private messaging, common token functionality, etc) in a loosely coupled microservice architecture with highly pluggable components. This enables application developers to focus on building innovative Web3 applications.
There aren't any out of the box bridges to connect two separate chains together, but with a collection of FireFly instances across a consortium, FireFly could help listen for events on Blockchain A and take an action on Blockchain B when certain conditions are met.
"},{"location":"faqs/#how-do-i-deploy-smart-contracts","title":"\ud83d\udcdc How do I deploy smart contracts?","text":"The recommended way to deploy smart contracts on Ethereum chains is by using FireFly's built in API. For a step by step example of how to do this you can refer to the Smart Contract Tutorial for Ethereum based chains.
For Fabric networks, please refer to the Fabric chaincode lifecycle docs for detailed instructions on how to deploy and manage Fabric chaincode.
"},{"location":"faqs/#can-i-connect-firefly-to-metamask","title":"\ud83e\udd8a Can I connect FireFly to MetaMask?","text":"Yes! Before you set up MetaMask you'll likely want to create some tokens that you can use to send between wallets on your FF network. Go to the tokens tab in your FireFly node's UI, create a token pool, and then mint some tokens. Once you've done this, follow the steps listed here to set up MetaMask on your network.
"},{"location":"faqs/#connect-with-us-on-discord","title":"\ud83d\ude80 Connect with us on Discord","text":"If your question isn't answered here or if you have immediate questions please don't hesitate to reach out to us on Discord in the firefly channel:
If you're new to FireFly, this is the perfect place to start! With the FireFly CLI and the FireFly Sandbox it's really easy to get started building powerful blockchain apps. Just follow along with the steps below and you'll be up and running in no time!
"},{"location":"gettingstarted/#what-you-will-accomplish-with-this-guide","title":"What you will accomplish with this guide","text":"
With this easy-to-follow guide, you'll go from \"zero\" to blockchain-hero in the time it takes to drink a single cup of coffee. It will walk you through setting up your machine, all the way through sending your first blockchain transactions using the FireFly Sandbox.
"},{"location":"gettingstarted/#were-here-to-help","title":"We're here to help!","text":"We want to make it as easy as possible for anyone to get started with FireFly, and we don't want anyone to feel like they're stuck. If you're having trouble, or are just curious about what else you can do with FireFly we encourage you to join the Hyperledger Discord server and come chat with us in the #firefly channel.
"},{"location":"gettingstarted/#get-started-install-the-firefly-cli","title":"Get started: Install the FireFly CLI","text":"The first step is to install the FireFly CLI on your machine. Once it is installed, you'll use it to create and start a FireFly stack.
\u2460 Install the FireFly CLI \u2192
"},{"location":"gettingstarted/firefly_cli/","title":"Install the FireFly CLI","text":""},{"location":"gettingstarted/firefly_cli/#prerequisites","title":"Prerequisites","text":"In order to run the FireFly CLI, you will need a few things installed on your dev machine:
NOTE: For Linux users, it is recommended that you add your user to the docker group so that you do not have to run ff or docker as root or with sudo. For more information about Docker permissions on Linux, please see Docker's documentation on the topic.
NOTE: For Windows users, we recommend that you use Windows Subsystem for Linux 2 (WSL2). Binaries provided for Linux will work in this environment.
"},{"location":"gettingstarted/firefly_cli/#install-the-cli","title":"Install the CLI","text":"There are several ways to install the FireFly CLI. The easiest way to get up and running with the FireFly CLI is to download a pre-compiled binary of the latest release.
"},{"location":"gettingstarted/firefly_cli/#install-via-binary-package-download","title":"Install via Binary Package Download","text":"Download the package for your OS by navigating to the latest release page and downloading the appropriate package for your OS and architecture.
"},{"location":"gettingstarted/firefly_cli/#unpack-and-install-the-binary","title":"Unpack and Install the Binary","text":"Assuming you downloaded the package from GitHub into your Downloads directory, run the following command to extract the binary and move it to your system path:
sudo tar -zxf ~/Downloads/firefly-cli_*.tar.gz -C /usr/local/bin ff && rm ~/Downloads/firefly-cli_*.tar.gz\n If you downloaded the package into a different directory, adjust the command to point to the correct location of the firefly-cli_*.tar.gz file.
NOTE: On recent versions of macOS, default security settings will prevent the FireFly CLI binary from running, because it was downloaded from the internet. You will need to allow the FireFly CLI in System Preferences, before it will run.
"},{"location":"gettingstarted/firefly_cli/#install-via-homebrew-macos","title":"Install via Homebrew (macOS)","text":"You can also install the FireFly CLI using Homebrew:
brew install firefly\n"},{"location":"gettingstarted/firefly_cli/#alternative-installation-method-install-via-go","title":"Alternative installation method: Install via Go","text":"If you have a local Go development environment, and you have included ${GOPATH}/bin in your path, you could also use Go to install the FireFly CLI by running:
go install github.com/hyperledger/firefly-cli/ff@latest\n"},{"location":"gettingstarted/firefly_cli/#verify-the-installation","title":"Verify the installation","text":"After using either installation method above, you can verify that the CLI is successfully installed by running ff version. This should print the current version like this:
{\n \"Version\": \"v0.0.47\",\n \"License\": \"Apache-2.0\"\n}\n"},{"location":"gettingstarted/firefly_cli/#next-steps-start-your-environment","title":"Next steps: Start your environment","text":"Now that you've got the FireFly CLI set up on your machine, the next step is to create and start a FireFly stack.
\u2461 Start your environment \u2192
"},{"location":"gettingstarted/sandbox/","title":"Use the Sandbox","text":""},{"location":"gettingstarted/sandbox/#previous-steps-start-your-environment","title":"Previous steps: Start your environment","text":"If you haven't started a FireFly stack already, please go back to the previous step and read the guide on how to Start your environment.
\u2190 \u2461 Start your environment
Now that you have a full network of three Supernodes running on your machine, let's look at the first two components that you will interact with: the FireFly Sandbox and the FireFly Explorer.
"},{"location":"gettingstarted/sandbox/#video-walkthrough","title":"Video walkthrough","text":"This video is a walkthrough of the FireFly Sandbox and FireFly Explorer from the FireFly 1.0 launch webinar. At this point you should be able to follow along and try all these same things on your own machine.
"},{"location":"gettingstarted/sandbox/#open-the-firefly-sandbox-for-the-first-member","title":"Open the FireFly Sandbox for the first member","text":"When you set up your FireFly stack in the previous section, it should have printed some URLs like the following. Open the link in a browser for the Sandbox UI for member '0'. It should be: http://127.0.0.1:5109
ff start dev\nthis will take a few seconds longer since this is the first time you're running this stack...\ndone\n\nWeb UI for member '0': http://127.0.0.1:5000/ui\nSandbox UI for member '0': http://127.0.0.1:5109\n\nWeb UI for member '1': http://127.0.0.1:5001/ui\nSandbox UI for member '1': http://127.0.0.1:5209\n\nWeb UI for member '2': http://127.0.0.1:5002/ui\nSandbox UI for member '2': http://127.0.0.1:5309\n\n\nTo see logs for your stack run:\n\nff logs dev\n"},{"location":"gettingstarted/sandbox/#sandbox-layout","title":"Sandbox Layout","text":"The Sandbox is split up into three columns:
"},{"location":"gettingstarted/sandbox/#left-column-prepare-your-request","title":"Left column: Prepare your request","text":"On the left-hand side of the page, you can fill out simple form fields to construct messages and more. Some tabs have more types of requests on them in sections that can be expanded or collapsed. Across the top of this column there are three tabs that switch between the three main sets of functionality in the Sandbox. The next three sections of this guide will walk you through each one of these.
The first tab we will explore is the MESSAGING tab. This is where we can send broadcasts and private messages.
"},{"location":"gettingstarted/sandbox/#middle-column-preview-server-code-and-see-response","title":"Middle column: Preview server code and see response","text":"As you type in the form on the left side of the page, you may notice that the source code in the top middle of the page updates automatically. If you were building a backend app, this is an example of code that your app could use to call the FireFly SDK. The middle column also contains a RUN button to actually send the request.
On the right-hand side of the page you can see a stream of events being received on a WebSocket connection that the backend has open to FireFly. For example, as you make requests to send messages, you can see when the messages are asynchronously confirmed.
"},{"location":"gettingstarted/sandbox/#messages","title":"Messages","text":"The Messages tab is where we can send broadcast and private messages to other members and nodes in the FireFly network. Messages can be a string, any arbitrary JSON object, or a binary file. For more details, please see the tutorial on Broadcasting data and Privately sending data.
"},{"location":"gettingstarted/sandbox/#things-to-try-out","title":"Things to try out","text":"The Tokens tab is where you can create token pools, and mint, burn, or transfer tokens. This works with both fungible and non-fungible tokens (NFTs). For more details, please see the Tokens tutorials.
"},{"location":"gettingstarted/sandbox/#things-to-try-out_1","title":"Things to try out","text":"The Contracts section of the Sandbox lets you interact with custom smart contracts, right from your web browser! The Sandbox also provides some helpful tips on deploying your smart contract to the blockchain. For more details, please see the tutorial on Working with custom smart contracts.
"},{"location":"gettingstarted/sandbox/#things-to-try-out_2","title":"Things to try out","text":"At this point you should have a pretty good understanding of some of the major features of Hyperledger FireFly. Now, using what you've learned, you can go and build your own Web3 app! Don't forget to join the Hyperledger Discord server and come chat with us in the #firefly channel.
"},{"location":"gettingstarted/setup_env/","title":"Start your environment","text":""},{"location":"gettingstarted/setup_env/#previous-steps-install-the-firefly-cli","title":"Previous steps: Install the FireFly CLI","text":"If you haven't set up the FireFly CLI already, please go back to the previous step and read the guide on how to Install the FireFly CLI.
\u2190 \u2460 Install the FireFly CLI
Now that you have the FireFly CLI installed, you are ready to run some Supernodes on your machine!
"},{"location":"gettingstarted/setup_env/#a-firefly-stack","title":"A FireFly Stack","text":"A FireFly stack is a collection of Supernodes with networking and configuration that are designed to work together on a single development machine. A stack has multiple members (also referred to as organizations). Each member has their own Supernode within the stack. This allows developers to build and test data flows with a mix of public and private data between various parties, all within a single development environment.
The stack also contains an instance of the FireFly Sandbox for each member. This is an example of an end-user application that uses FireFly's API. It has a backend and a frontend which are designed to walk developers through the features of FireFly, and provides code snippets as examples of how to build those features into their own application. The next section in this guide will walk you through using the Sandbox.
"},{"location":"gettingstarted/setup_env/#system-resources","title":"System Resources","text":"The FireFly stack will run in a docker-compose project. For systems that run Docker containers inside a virtual machine, like macOS, you need to make sure that you've allocated enough memory to the Docker virtual machine. We recommend allocating 1GB per member. In this case, we're going to set up a stack with 3 members, so please make sure you have at least 3 GB of RAM allocated in your Docker Desktop settings.
It's really easy to create a new FireFly stack. The ff init command can create a new stack for you, and will prompt you for a few details such as the name, and how many members you want in your stack.
To create an Ethereum based stack, run:
ff init ethereum\n To create a Fabric based stack, run:
ff init fabric\n Choose a stack name. For this guide, I will choose the name dev, but you can pick whatever you want:
stack name: dev\n Choose the number of members for your stack. For this guide, we should pick 3 members, so we can try out both public and private messaging use cases:
number of members: 3\n "},{"location":"gettingstarted/setup_env/#stack-initialization-options","title":"Stack initialization options","text":"There are quite a few options that you can choose from when creating a new stack. For now, we'll just stick with the defaults. To see the full list of Ethereum options, just run ff init ethereum --help or to see the full list of Fabric options run ff init fabric --help
ff init ethereum --help\nCreate a new FireFly local dev stack using an Ethereum blockchain\n\nUsage:\n ff init ethereum [stack_name] [member_count] [flags]\n\nFlags:\n --block-period int Block period in seconds. Default is variable based on selected blockchain provider. (default -1)\n -c, --blockchain-connector string Blockchain connector to use. Options are: [evmconnect ethconnect] (default \"evmconnect\")\n -n, --blockchain-node string Blockchain node type to use. Options are: [geth besu remote-rpc] (default \"geth\")\n --chain-id int The chain ID - also used as the network ID (default 2021)\n --contract-address string Do not automatically deploy a contract, instead use a pre-configured address\n -h, --help help for ethereum\n --remote-node-url string For cases where the node is pre-existing and running remotely\n\nGlobal Flags:\n --ansi string control when to print ANSI control characters (\"never\"|\"always\"|\"auto\") (default \"auto\")\n --channel string Select the FireFly release channel to use. Options are: [stable head alpha beta rc] (default \"stable\")\n --connector-config string The path to a yaml file containing extra config for the blockchain connector\n --core-config string The path to a yaml file containing extra config for FireFly Core\n -d, --database string Database type to use. Options are: [sqlite3 postgres] (default \"sqlite3\")\n -e, --external int Manage a number of FireFly core processes outside of the docker-compose stack - useful for development and debugging\n -p, --firefly-base-port int Mapped port base of FireFly core API (1 added for each member) (default 5000)\n --ipfs-mode string Set the mode in which IPFS operates. Options are: [private public] (default \"private\")\n -m, --manifest string Path to a manifest.json file containing the versions of each FireFly microservice to use. 
Overrides the --release flag.\n --multiparty Enable or disable multiparty mode (default true)\n --node-name stringArray Node name\n --org-name stringArray Organization name\n --prometheus-enabled Enables Prometheus metrics exposition and aggregation to a shared Prometheus server\n --prometheus-port int Port for the shared Prometheus server (default 9090)\n --prompt-names Prompt for org and node names instead of using the defaults\n -r, --release string Select the FireFly release version to use. Options are: [stable head alpha beta rc] (default \"latest\")\n --request-timeout int Custom request timeout (in seconds) - useful for registration to public chains\n --sandbox-enabled Enables the FireFly Sandbox to be started with your FireFly stack (default true)\n -s, --services-base-port int Mapped port base of services (100 added for each member) (default 5100)\n -t, --token-providers stringArray Token providers to use. Options are: [none erc1155 erc20_erc721] (default [erc20_erc721])\n -v, --verbose verbose log output\n"},{"location":"gettingstarted/setup_env/#start-your-stack","title":"Start your stack","text":"To start your stack simply run:
ff start dev\n This may take a minute or two and in the background the FireFly CLI will do the following for you:
Deploy the BatchPin smart contract and the ERC-1155 token smart contract. NOTE: For macOS users, the default port (5000) is already in use by the Control Center service (AirPlay Receiver). You can either disable this service in your environment, or use a different port when creating your stack (e.g. ff init dev -p 8000)
After your stack finishes starting it will print out the links to each member's UI and the Sandbox for that node:
ff start dev\nthis will take a few seconds longer since this is the first time you're running this stack...\ndone\n\nWeb UI for member '0': http://127.0.0.1:5000/ui\nSandbox UI for member '0': http://127.0.0.1:5109\n\nWeb UI for member '1': http://127.0.0.1:5001/ui\nSandbox UI for member '1': http://127.0.0.1:5209\n\nWeb UI for member '2': http://127.0.0.1:5002/ui\nSandbox UI for member '2': http://127.0.0.1:5309\n\n\nTo see logs for your stack run:\n\nff logs dev\n"},{"location":"gettingstarted/setup_env/#next-steps-use-in-the-sandbox","title":"Next steps: Use in the Sandbox","text":"Now that you have some Supernodes running, it's time to start playing: in the Sandbox!
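The URLs above follow the port rules from the `ff init` flags: `--firefly-base-port` (default 5000) adds 1 per member, and `--services-base-port` (default 5100) adds 100 per member. As an illustrative sketch (the `member_urls` helper is hypothetical, and the sandbox offset of 9 within each member's services range is inferred from the example output, not documented):

```python
def member_urls(member: int, firefly_base_port: int = 5000,
                services_base_port: int = 5100) -> dict:
    """Compute a member's Web UI and Sandbox URLs from the CLI port rules:
    +1 per member on the core port, +100 per member on the services base
    (sandbox offset 9 is an assumption inferred from the example output)."""
    return {
        "web_ui": f"http://127.0.0.1:{firefly_base_port + member}/ui",
        "sandbox": f"http://127.0.0.1:{services_base_port + 100 * member + 9}",
    }
```

For example, member '2' maps to the Web UI on port 5002 and the Sandbox on port 5309, matching the output above.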
\u2462 Use the Sandbox \u2192
"},{"location":"overview/gateway_features/","title":"Web3 Gateway Features","text":"Web3 Gateway features allow your FireFly Supernode to connect to any blockchain ecosystem, public or private. When a chain is connected, the FireFly Supernode may invoke custom smart contracts, interact with tokens, and monitor transactions. A single FireFly Supernode is able to have multiple namespaces, or isolated environments, where each namespace is a connection to a different chain.
"},{"location":"overview/gateway_features/#transfer-tokenized-value","title":"Transfer tokenized value","text":"The Digital Asset Features allow you to connect to token economies, in multiple blockchains, using the same infrastructure and signing keys.
The complexities of how each token works, and how each blockchain works, are abstracted away from you by the Hyperledger FireFly Connector Framework.
All of the layers of plumbing required to execute a transaction exactly once on a blockchain, and tracking it through to completion, are part of the stack. Deploy and configure them once in your Web3 gateway, and use them for multiple use cases in your enterprise.
"},{"location":"overview/gateway_features/#invoke-any-other-type-of-smart-contract","title":"Invoke any other type of smart contract","text":"The API Generation features of Hyperledger FireFly allow you to generate a convenient and reliable REST API for any smart contract logic.
Then you just invoke that contract like you would any other API, with all the features you would expect like an OpenAPI 3.0 specification for the API, and UI explorer.
The same reliable transaction submission framework is used as for token transfers, and you can use Hyperledger FireFly as a high volume staging post for those transactions.
For EVM based chains, these features were significantly enhanced in the new EVMConnect connector introduced in v1.1 of FireFly (superseding EthConnect).
"},{"location":"overview/gateway_features/#index-data-from-the-blockchain","title":"Index data from the blockchain","text":"Blockchain nodes are not designed for efficient querying of historical information. Instead their core function is to provide an ordered ledger of actions+events, along with a consistent world state at any point in time.
This means that almost all user experiences and business APIs need a separate data store that provides a fast indexed view of the history and current state of the chain.
As an example, you've probably looked at a Block Explorer for a public blockchain on the web. Well, you weren't looking directly at the blockchain node. You were querying an off-chain indexed database, of all the blocks and transactions on that chain. An indexer behind the scenes was listening to the blockchain and synchronizing the off-chain state.
Hyperledger FireFly has a built-in indexer for tokens, that maps every token mint/burn/transfer/approve operation that happens on the blockchain into the database for fast query. You just specify which tokens you're interested in, and FireFly takes care of the rest.
Additionally, FireFly does the heavy lifting part of indexing for all other types of smart contract event that might occur. It scrapes the blockchain for the events, formats them into easy to consume JSON, and reliably delivers them to your application.
So your application just needs a small bit of code to take those payloads, and insert them into the database with the right database indexes you need to query your data by.
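As a sketch of that "small bit of code": take each delivered payload and upsert it into a local table with the indexes your queries need. The field names used here (id, name, blockNumber, output) are simplified stand-ins, not the exact FireFly event schema:

```python
import json
import sqlite3


def index_event(db: sqlite3.Connection, payload: str) -> None:
    """Insert one event payload from the event bus into a local table,
    keyed by event id so redelivered events are ignored.
    Field names are illustrative, not the exact FireFly schema."""
    event = json.loads(payload)
    db.execute(
        "CREATE TABLE IF NOT EXISTS chain_events ("
        "  id TEXT PRIMARY KEY, name TEXT, block INTEGER, output TEXT)"
    )
    # Index by the column the application queries on
    db.execute("CREATE INDEX IF NOT EXISTS idx_events_name ON chain_events(name)")
    db.execute(
        "INSERT OR IGNORE INTO chain_events VALUES (?, ?, ?, ?)",
        (event["id"], event["name"], event["blockNumber"],
         json.dumps(event["output"])),
    )
    db.commit()
```

The `INSERT OR IGNORE` on the event id makes the handler idempotent, which matters because at-least-once event delivery can redeliver the same event.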
"},{"location":"overview/gateway_features/#reliably-trigger-events-in-your-applications","title":"Reliably trigger events in your applications","text":"One of the most important universal rules about Web3 applications, is that they are event-driven.
No one party in the system can choose to change the state; instead they must submit transactions that get ordered against everyone else's transactions, and only once confirmed through the consensus algorithm are they actioned.
This means the integration into your application and core systems needs to be event-driven too.
The same features that support reliable indexing of the blockchain data, allow reliable triggering of application code, business workflows, and core system integrations.
Learn more about the FireFly Event Bus
"},{"location":"overview/gateway_features/#manage-decentralized-data-nfts-etc","title":"Manage decentralized data (NFTs etc.)","text":"Your blockchain transactions are likely to refer to data that is stored off-chain.
One common example is non-fungible-token (NFT) metadata, images and documents. These are not a good fit for storing directly in any blockchain ledger, so complementary decentralized technologies like the InterPlanetary File System (IPFS) are used to make the data widely available and resilient outside of the blockchain itself.
As a publisher or consumer of such metadata from decentralized storage, you need to be confident you have your own copy safe. So just like with the blockchain data, Hyperledger FireFly can act as a staging post for this data.
Structured JSON data can be stored, uploaded and downloaded from the FireFly database.
Large image/document/video payloads are handled by the pluggable Data Exchange microservice, which allows you to attach local or cloud storage to manage your copy of the data.
FireFly then provides a standardized API to allow publishing of this data. So configuring a reliable gateway to the decentralized storage tier can be done once, and then accessed from your applications via a single Web3 Gateway.
"},{"location":"overview/gateway_features/#maintain-a-private-address-book","title":"Maintain a private address book","text":"You need to manage your signing keys, and know the signing keys of others you are transacting with. A blockchain address like 0x0742e81393ee79C768e84cF57F1bF314F0f31ECe is not very helpful for this.
So Hyperledger FireFly provides a pluggable identity system, built on the foundation of the Decentralized IDentifier (DID). When in Web3 Gateway Mode these identities are not shared or published, and simply provide you a local address book.
You can associate profile information with the identities, for example correlating them to the identifiers in your own core systems - such as an Identity and Access Management (IAM) system, or Know Your Customer (KYC) database.
Learn more about Hyperledger FireFly Identities
"},{"location":"overview/public_vs_permissioned/","title":"Public and Permissioned Blockchain","text":""},{"location":"overview/public_vs_permissioned/#public-and-permissioned-blockchains","title":"Public and Permissioned Blockchains","text":"Separate from the choice of technology for your blockchain is what combination of blockchain ecosystems you will integrate with.
There are a huge variety of options, and increasingly you might find yourself integrating with multiple ecosystems in your solutions.
A rough (and incomplete) high level classification of the blockchains available is as follows:
The lines are blurring between these categorizations as the technologies and ecosystems evolve.
"},{"location":"overview/public_vs_permissioned/#public-blockchain-variations","title":"Public blockchain variations","text":"For the public Layer 1 and 2 solutions, there are too many subclassifications to go into in detail here:
The thing most consistent across public blockchain technologies, is that the technical decisions are backed by token economics.
Put simply, creating a system where it's more financially rewarding to behave honestly, than it is to subvert and cheat the system.
This means that participation has a cost, and that the mechanisms needed to reliably get your transactions into these systems are complex. It also means the time it takes to get a transaction onto the chain can be much longer than for a permissioned blockchain, with the potential to have to make a number of adjustments/resubmissions.
The choice of whether to run your own node, or use a managed API, to access these blockchain ecosystems is also a factor in the behavior of the transaction submission and event streaming.
"},{"location":"overview/public_vs_permissioned/#firefly-architecture-for-public-chains","title":"FireFly architecture for public chains","text":"One of the fastest evolving aspects of the Hyperledger FireFly ecosystem is how it enables enterprises to participate in these public ecosystems.
The architecture is summarized as follows:
An operation resource within FireFly Core to store and update state; nonce assignment; and a pluggable connector API (ffcapi). This evolution involves a significant refactoring of components used for production solutions in the FireFly Ethconnect microservice since mid-2018. This was summarized in firefly-ethconnect#149, and culminated in the creation of a new repository in 2022.
You can follow the progress and contribute in this repo: https://github.com/hyperledger/firefly-transaction-manager
"},{"location":"overview/supernode_concept/","title":"Introduction to Hyperledger FireFly","text":""},{"location":"overview/supernode_concept/#your-gateway-to-web3-technologies","title":"Your Gateway to Web3 Technologies","text":"Hyperledger FireFly is an organization's gateway to Web3, including all the blockchain ecosystems that they participate in.
Multiple blockchains, multiple token economies, and multiple business networks.
FireFly is not another blockchain implementation, rather it is a pluggable API Orchestration and Data layer, integrating into all of the different types of decentralized technologies that exist in Web3:
Hyperledger FireFly is a toolkit for building and connecting new full-stack decentralized applications (dapps), as well as integrating your existing core systems to the world of Web3.
It has a runtime engine, and it provides a data layer that synchronizes state from the blockchain and other Web3 technologies. It exposes an API and Event Bus to your business logic, that is reliable, developer friendly and ready for enterprise use.
We call this a Supernode - it sits between the application and the underlying infrastructure nodes, providing layers of additional function.
The concept of a Supernode has evolved over the last decade of enterprise blockchain projects, as developers realized that they need much more than a blockchain node for their projects to be successful.
Without a technology like Hyperledger FireFly, the application layer becomes extremely complex and fragile. Tens of thousands of lines of complex low-level \"plumbing\" / \"middleware\" code are required to integrate the web3 infrastructure into the application. This code provides zero unique business value to the solution, but can consume a huge proportion of the engineering budget and maintenance cost if built bespoke within a solution.
"},{"location":"overview/usage_patterns/","title":"Usage Patterns","text":"There are two modes of usage for Hyperledger FireFly: Web3 Gateway and Multiparty
A single runtime can operate in both of these modes, using different namespaces.
"},{"location":"overview/usage_patterns/#web3-gateway-mode","title":"Web3 Gateway Mode","text":"Web3 Gateway mode lets you interact with any Web3 application, regardless of whether Hyperledger FireFly is being used by other members of your business network.
In this mode you can:
Learn more about Web3 Gateway Mode.
"},{"location":"overview/usage_patterns/#multiparty-mode","title":"Multiparty Mode","text":"Multiparty mode is used to build multi-party systems, with a common application runtime deployed by each enterprise participant.
This allows sophisticated applications to be built, that all use the pluggable APIs of Hyperledger FireFly to achieve end-to-end business value in an enterprise context.
In this mode you can do everything you could do in Web3 Gateway mode, plus:
Learn more about Multiparty Mode.
"},{"location":"overview/key_components/","title":"Key Features","text":"Hyperledger FireFly provides a rich suite of features for building new applications, and connecting existing Web3 ecosystems to your business. In this section we introduce each core pillar of functionality.
"},{"location":"overview/key_components/apps/","title":"Apps","text":""},{"location":"overview/key_components/apps/#apps","title":"Apps","text":"Rapidly accelerating development of applications is a key feature of Hyperledger FireFly.
The toolkit is designed to support the full-stack of applications in the enterprise Web3 ecosystem, not just the Smart Contract layer.
Business logic APIs, back-office system integrations, and web/mobile user experiences are just as important to the overall Web3 use case.
These layers require a different developer skill-set to the on-chain Smart Contracts, and those developers must have the tools they need to work efficiently.
"},{"location":"overview/key_components/apps/#api-gateway","title":"API Gateway","text":"FireFly provides APIs that:
Learn more about deploying APIs for custom smart contracts in this tutorial
"},{"location":"overview/key_components/apps/#event-streams","title":"Event Streams","text":"The reality is that the only programming paradigm that works for a decentralized solutions, is an event-driven one.
For this reason, all blockchain technologies are event-driven programming interfaces at their core.
In an overall solution, those on-chain events must be coordinated with off-chain private data transfers, and existing core-systems / human workflows.
This means great event support is a must:
Learn all about the Hyperledger FireFly Event Bus, and event-driven application architecture, in this reference section
"},{"location":"overview/key_components/apps/#api-generation","title":"API Generation","text":"The blockchain is going to be at the heart of your Web3 project. While usually small in overall surface area compared to the lines of code in the traditional application tiers, this kernel of mission-critical code is what makes your solution transformational compared to a centralized / Web 2.0 solution.
Whether the smart contract is hand crafted for your project, an existing contract on a public blockchain, or a built-in pattern of a framework like FireFly - it must be interacted with correctly.
So there can be no room for misinterpretation in the hand-off between the blockchain Smart Contract specialist (familiar with EVM contracts in Solidity/Vyper, Fabric chaincode, or maybe even raw block transition logic in Rust or Go) and the backend / full-stack application developer or core-system integrator.
Well documented APIs are the modern norm for this, and it is no different for blockchain.
This means Hyperledger FireFly provides:
The ability for every component to be pluggable is at the core of Hyperledger FireFly.
A microservices approach is used, combining code plug-points in the core runtime, with API extensibility to remote runtimes implemented in a variety of programming languages.
"},{"location":"overview/key_components/connectors/#extension-points","title":"Extension points","text":"Learn more about the plugin architecture here
"},{"location":"overview/key_components/connectors/#blockchain-connector-framework","title":"Blockchain Connector Framework","text":"The most advanced extension point is for the blockchain layer, where multiple layers of extensibility are provided to support the programming models, and behaviors of different blockchain technologies.
This framework has been proven with technologies as different as EVM based Layer 2 Ethereum Scaling solutions like Polygon, all the way to permissioned Hyperledger Fabric networks.
Check out instructions to connect to a list of remote blockchain networks here.
Find out more about the Blockchain Connector Framework here.
"},{"location":"overview/key_components/digital_assets/","title":"Digital Assets","text":""},{"location":"overview/key_components/digital_assets/#digital-asset-features","title":"Digital asset features","text":"The modelling, transfer and management of digital assets is the core programming foundation of blockchain.
Yet out of the box, raw blockchains designed to efficiently manage these assets in large ecosystems do not come with all the building blocks needed by applications.
"},{"location":"overview/key_components/digital_assets/#token-api","title":"Token API","text":"Token standards have been evolving in the industry through standards like ERC-20/ERC-721, and the Web3 signing wallets that support these.
Hyperledger FireFly brings this same standardization to the application tier, providing APIs that work across token standards and blockchain implementations, with consistent and interoperable support.
This means one application or set of back-end systems can integrate with multiple blockchains and different token implementations.
Pluggability here is key, so that the rules of governance of each digital asset ecosystem can be exposed and enforced. Whether tokens are fungible, non-fungible, or some hybrid in between.
Learn more about token standards for fungible tokens, and non-fungible tokens (NFTs) in this set of tutorials
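As an illustration of the cross-standard Token API idea, the sketch below builds a token transfer request for FireFly's REST API. This is a hedged sketch only: the base URL, pool name, recipient address, and amount are assumptions, and the endpoint path and field names reflect the FireFly v1 API as commonly documented; verify them against your FireFly version.

```python
import json

# Assumed local FireFly Core URL (hypothetical for this example)
FIREFLY_BASE = "http://localhost:5000"

def token_transfer_request(pool, to, amount):
    """Build the URL and JSON body for a fungible token transfer.

    The same request shape works regardless of whether the pool is
    backed by an ERC-20, ERC-721, or another token implementation -
    that abstraction is the point of the Token API.
    """
    url = f"{FIREFLY_BASE}/api/v1/tokens/transfers"
    body = json.dumps({"pool": pool, "to": to, "amount": amount})
    return url, body

# Hypothetical pool name and recipient key
url, body = token_transfer_request("pool1", "0xrecipient", "10")
# e.g. POST the body to the URL with any HTTP client
```

The application code stays the same whether the underlying connector speaks to an EVM chain or a Fabric network; only the pool configuration differs.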
"},{"location":"overview/key_components/digital_assets/#transfer-history-audit-trail","title":"Transfer history / audit trail","text":"For efficiency blockchains do not provide a direct ability to query historical transaction information.
Depending on the blockchain technology, even the current balance of your wallet can be complex to calculate - particularly for blockchain technologies based on an Unspent Transaction Output (UTXO) model.
So off-chain indexing of transaction history is an absolute must-have for any digital asset solution.
Hyperledger FireFly provides:
Wallet and signing-key management is a critical requirement for any blockchain solution, particularly those involving the transfer of digital assets between wallets.
Hyperledger FireFly provides you the ability to:
The reality of most Web3 scenarios is that only a small part of the overall use-case can be represented inside the blockchain or distributed ledger technology.
Some additional data flow is always required. This does not diminish the value of executing the kernel of the logic within the blockchain itself.
Hyperledger FireFly embraces this reality, and allows an organization to keep track of the relationship between the off-chain data flow, and the on-chain transactions.
Let's look at a few common examples:
"},{"location":"overview/key_components/flows/#digital-asset-transfers","title":"Digital Asset Transfers","text":"Examples of common data flows performed off-chain, include Know Your Customer (KYC) and Anti Money Laundering (AML) checks that need to be performed and validated before participating in transactions.
There might also be document management and business transaction flows required to verify the conditions are correct to digitally settle a transaction. Have the goods been delivered? Are the contracts in place?
In regulated enterprise scenarios it is common to see a 10-to-1 difference in the number of steps performed off-chain to complete a business transaction, vs. the number of steps performed on-chain.
These off-chain data flows might be coordinated with on-chain smart contracts that lock assets in digital escrow until the off-chain steps are completed by each party, and protect each party while the steps are being completed.
A common form of digital escrow is a Hashed Timelock Contract (HTLC).
"},{"location":"overview/key_components/flows/#non-fungible-tokens-nfts-and-hash-pinning","title":"Non-fungible Tokens (NFTs) and hash-pinning","text":"The data associated with an NFT might be as simple as a JSON document pointing at an interesting piece of artwork, or as complex a set of high resolution scans / authenticity documents representing a digital twin of a real world object.
Here the concept of a hash pinning is used - allowing anyone who has a copy of the original data to recreate the hash that is stored in the on-chain record.
With even the simplest NFT the business data is not stored on-chain, so simple data flow is always required to publish/download the off-chain data.
The data might be published publicly for anyone to download, or it might be sensitive and require a detailed permissioning flow to obtain it from a current holder of that data.
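The hash-pinning idea described above can be sketched in a few lines: hash the off-chain data so that anyone holding a copy can recompute the value stored in the on-chain record. The metadata document below is purely illustrative, and SHA-256 is used here as a representative hash algorithm.

```python
import hashlib
import json

# Illustrative off-chain NFT metadata (hypothetical content)
metadata = {"name": "Artwork #1", "image": "ipfs://example-cid"}

# Canonical serialization, so every party computes the same bytes
canonical = json.dumps(metadata, sort_keys=True, separators=(",", ":"))

# 'pin' is the value that would be recorded on-chain
pin = hashlib.sha256(canonical.encode()).hexdigest()

# Recomputing the hash over the same data verifies integrity;
# any change to the data yields a different hash.
```

Because the on-chain record holds only the hash, the data itself can remain private (or even be deleted later by agreement), while the pin remains as immutable proof of what the data was.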
"},{"location":"overview/key_components/flows/#dynamic-nfts-and-business-transaction-flow","title":"Dynamic NFTs and Business Transaction Flow","text":"In an enterprise context, an NFT might have a dynamic ever-evolving trail of business transaction data associated with it. Different parties might have different views of that business data, based on their participation in the business transactions associated with it.
Here the NFT becomes like a foreign key integrated across the core systems of a set of enterprises working together in a set of business transactions.
The data itself needs to be downloaded, retained, processed and rendered. Probably integrated to systems, acted upon, and used in multiple exchanges between companies on different blockchains, or off-chain.
The business process is accelerated through this Enterprise NFT on the blockchain, as all parties have matched or bound their own private data store to that NFT. This means they can be confident they are executing a business transaction against the same real-world person or thing.
"},{"location":"overview/key_components/flows/#data-and-transaction-flow-patterns","title":"Data and Transaction Flow patterns","text":"Hyperledger FireFly provides the raw tools for building data and transaction flow patterns, such as storing, hashing and transferring data. It provides the event bus to trigger off-chain applications and integration to participate in the flows.
It also provides the higher level flow capabilities that are needed for multiple parties to build sophisticated transaction flows together, massively simplifying the application logic required:
Learn more in Multiparty Process Flows
"},{"location":"overview/key_components/orchestration_engine/","title":"Orchestration Engine","text":""},{"location":"overview/key_components/orchestration_engine/#firefly-core","title":"FireFly Core","text":"At the core of Hyperledger FireFly is an event-driven engine that routes, indexed, aggregates, and sequences data to and from the blockchain, and other connectors.
"},{"location":"overview/key_components/orchestration_engine/#data-layer","title":"Data Layer","text":"Your own private view of the each network you connect:
Whether a few dozen companies in a private blockchain consortium, or millions of users connected to a public blockchain network - one thing is always true:
Decentralized applications are event-driven.
In an enterprise context, you need to think not only about how those events are being handled and made consistent within the blockchain layer, but also how those events are being processed and integrated to your core systems.
FireFly provides you with the reliable streams of events you need, as well as the interfaces to subscribe to those events and integrate them into your core systems.
Learn more about the event bus and event-driven programming in this reference document
"},{"location":"overview/key_components/security/","title":"Security","text":""},{"location":"overview/key_components/security/#api-security","title":"API Security","text":"Hyperledger FireFly provides a pluggable infrastructure for authenticating API requests.
Each namespace can be configured with a different authentication plugin, such that different teams can have different access to resources on the same FireFly server.
A reference plugin implementation is provided for HTTP Basic Auth, combined with htpasswd verification of passwords using a bcrypt encoding.
See this config section for details, and the reference implementation in Github
Pre-packaged vendor extensions to Hyperledger FireFly are known to be available, addressing more comprehensive role-based access control (RBAC) and JWT/OAuth based security models.
"},{"location":"overview/key_components/security/#data-partitioning-and-tenancy","title":"Data Partitioning and Tenancy","text":"Namespaces also provide a data isolation system for different applications / teams / tenants sharing a Hyperledger FireFly node.
Data is partitioned within the FireFly database by namespace. It is also possible to increase the separation between namespaces by using separate database configurations: for example, mapping namespaces to different databases or table spaces within a single database server, or even to different database servers.
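A configuration sketch of the stronger isolation option, mapping two namespaces to two separately configured database plugins, might look like the following. This is an illustrative sketch only: the plugin names, connection URLs, and hosts are assumptions, and the key layout follows the FireFly configuration structure as the author understands it; check the configuration reference for your FireFly version before use.

```yaml
# Hypothetical sketch - names and URLs are placeholders
plugins:
  database:
    - name: database0
      type: postgres
      postgres:
        url: postgres://host-a/firefly_ns1   # assumed connection URL
    - name: database1
      type: postgres
      postgres:
        url: postgres://host-b/firefly_ns2   # separate server for stronger isolation
namespaces:
  predefined:
    - name: ns1
      plugins: [database0]
    - name: ns2
      plugins: [database1]
```

Each namespace's data then lives in a different database server, rather than being partitioned only by a namespace column within one database.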
"},{"location":"overview/key_components/security/#private-data-exchange","title":"Private Data Exchange","text":"FireFly has a pluggable implementation of a private data transfer bus. This transport supports both structured data (conforming to agreed data formats), and large unstructured data & documents.
A reference microservice implementation is provided for HTTPS point-to-point connectivity with mutual TLS encryption.
See the reference implementation in Github
Pre-packaged vendor extensions to Hyperledger FireFly are known to be available, addressing message queue based reliable delivery of messages, hub-and-spoke connectivity models, chunking of very large file payloads, and end-to-end encryption.
Learn more about these private data flows in Multiparty Process Flows.
"},{"location":"overview/key_components/tools/","title":"Tools","text":""},{"location":"overview/key_components/tools/#firefly-cli","title":"FireFly CLI","text":"The FireFly CLI can be used to create local FireFly stacks for offline development of blockchain apps. This allows developers to rapidly iterate on their idea without needing to set up a bunch of infrastructure before they can write the first line of code.
"},{"location":"overview/key_components/tools/#firefly-sandbox","title":"FireFly Sandbox","text":"The FireFly Sandbox sits logically outside the Supernode, and it acts like an \"end-user\" application written to use FireFly's API. In your setup, you have one Sandbox per member, each talking to their own FireFly API. The purpose of the Sandbox is to provide a quick and easy way to try out all of the fundamental building blocks that FireFly provides. It also shows developers, through example code snippets, how they would implement the same functionality in their own app's backend.
\ud83d\uddd2 Technical details: The FireFly Sandbox is an example \"full-stack\" web app. It has a backend written in TypeScript / Node.js, and a frontend in TypeScript / React. When you click a button in your browser, the frontend makes a request to the backend, which then uses the FireFly Node.js SDK to make requests to FireFly's API.
"},{"location":"overview/key_components/tools/#firefly-explorer","title":"FireFly Explorer","text":"The FireFly explorer is a part of FireFly Core itself. It is a view into the system that allows operators to monitor the current state of the system and investigate specific transactions, messages, and events. It is also a great way for developers to see the results of running their code against FireFly's API.
"},{"location":"overview/multiparty/","title":"Enterprise multi-party systems","text":""},{"location":"overview/multiparty/#introduction","title":"Introduction","text":"Multiparty mode has all the features in Gateway mode with the added benefit of multi-party process flows.
A multi-party system is a class of application empowered by the technology revolution of blockchain digital ledger technology (DLT), and emerging cryptographic proof technologies like zero-knowledge proofs (ZKPs) and trusted execution environments (TEEs).
By combining these technologies with existing best practice technologies for data security in regulated industries, multi-party systems allow businesses to collaborate in ways previously impossible.
Through agreement on a common source of truth, such as the completion of a step in a business process to proceed, or the existence and ownership of a unique asset, businesses can cut out huge inefficiencies in existing multi-party processes.
New business and transaction models can be achieved, unlocking value in assets and data that were previously siloed within a single organization. Governance and incentive models can be created to enable secure collaboration in new ways, without compromising the integrity of an individual organization.
The technology is most powerful in ecosystems of \"coopetition\", where privacy and security requirements are high. Multi-party systems establish new models of trust, with easy to prove outcomes that minimize the need for third party arbitration, and costly investigation into disputes.
"},{"location":"overview/multiparty/#points-of-difference","title":"Points of difference","text":"Integration with existing systems of record is critical to unlock the potential of these new ecosystems. So multi-party systems embrace the existing investments of each party, rather than seeking to unify or replace them.
Multi-party systems are different from centralized third-party systems, because each party retains sovereignty over:
There are many multiparty use cases. An example for healthcare is detailed below.
Patient care requires multiple entities to work together, including healthcare providers, insurance companies, and medical systems. Sharing data between these parties is inefficient and prone to errors, and patient information must be kept secure and up to date. Blockchain's shared ledger makes it possible to automate data sharing while ensuring accuracy and privacy.
In a Multi-party FireFly system, entities are able to share data privately as detailed in the \"Data Exchange\" section. For example, imagine a scenario where there is one healthcare provider and two insurance companies operating in a multi-party system. Insurance company A may send private data to the healthcare provider that insurance company B is not privy to. While insurance company B may not know the contents of the data transferred, it may verify that a transfer of data did occur. This validation is all that's needed to maintain an up-to-date state of the blockchain.
In a larger healthcare ecosystem with many members, a similar concept may emerge with multiple variations of members.
"},{"location":"overview/multiparty/broadcast/","title":"Broadcast / shared data","text":""},{"location":"overview/multiparty/broadcast/#introduction","title":"Introduction","text":"Multi-party systems are about establishing a shared source of truth, and often that needs to include certain reference data that is available to all parties in the network. The data needs to be \"broadcast\" to all members, and also need to be available to new members that join the network
"},{"location":"overview/multiparty/broadcast/#blockchain-backed-broadcast","title":"Blockchain backed broadcast","text":"In order to maintain a complete history of all broadcast data for new members joining the network, FireFly uses the blockchain to sequence the broadcasts with pinning transactions referring to the data itself.
Using the blockchain also gives a global order of events for these broadcasts, which allows them to be processed by each member in a way that allows them to derive the same result - even though the processing logic on the events themselves is being performed independently by each member.
For more information see Multiparty Event Sequencing.
"},{"location":"overview/multiparty/broadcast/#shared-data","title":"Shared data","text":"The data included in broadcasts is not recorded on the blockchain. Instead a pluggable shared storage mechanism is used to contain the data itself. The on-chain transaction just contains a hash of the data that is stored off-chain.
This is because the data itself might be too large to be efficiently stored and transferred via the blockchain itself, or subject to deletion at some point in the future through agreement by the members in the network.
While the data should be reliably stored with visibility to all members of the network, the data can still be secured from leakage outside of the network.
The InterPlanetary File System (IPFS) is an example of a distributed technology for peer-to-peer storage and distribution of such data in a decentralized multi-party system. It provides secure connectivity between a number of nodes, combined with a decentralized index of data that is available, and native use of hashes within the technology as the way to reference data by content.
"},{"location":"overview/multiparty/broadcast/#firefly-built-in-broadcasts","title":"FireFly built-in broadcasts","text":"FireFly uses the broadcast mechanism internally to distribute key information to all parties in the network:
These definitions rely on the same assurances provided by blockchain backed broadcast that FireFly applications do.
Private data exchange is the way most enterprise business-to-business communication happens today. One party privately sends data to another, over a pipe that has been agreed as sufficiently secure between the two parties. That might be a REST API, SOAP Web Service, FTP / EDI, Message Queue (MQ), or other B2B Gateway technology.
The ability to perform these same private data exchanges within a multi-party system is critical. In fact, it's common for the majority of business data to continue to flow over such interfaces.
So real-time application to application private messaging, and private transfer of large blobs/documents, are first class constructs in the FireFly API.
"},{"location":"overview/multiparty/data_exchange/#qualities-of-service","title":"Qualities of service","text":"FireFly recognizes that a multi-party system will need to establish a secure messaging backbone, with the right qualities of service for their requirements. So the implementation is pluggable, and the plugin interface embraces the following quality of service characteristics that differ between different implementations.
A reference implementation of a private data exchange is provided as part of the FireFly project. This implementation uses peer-to-peer transfer over a synchronous HTTPS transport, backed by Mutual TLS authentication. X509 certificate exchange is orchestrated by FireFly, such that self-signed certificates can be used (or multiple PKI trust roots) and bound to the blockchain-backed identities of the organizations in FireFly.
See hyperledger/firefly-dataexchange-https
"},{"location":"overview/multiparty/deterministic/","title":"Deterministic Compute","text":""},{"location":"overview/multiparty/deterministic/#introduction","title":"Introduction","text":"A critical aspect of designing a multi-party systems, is choosing where you exploit the blockchain and other advanced cryptography technology to automate agreement between parties.
Specifically where you rely on the computation itself to come up with a result that all parties can independently trust. For example because all parties performed the same computation independently and came up with the same result, against the same data, and agreed to that result using a consensus algorithm.
The more sophisticated the agreement you want to prove, the more consideration needs to be given to factors such as:
FireFly embraces the fact that different use cases will make different decisions on how much of the agreement should be enforced through deterministic compute.
It also recognizes that multi-party systems include a mixture of approaches alongside deterministic compute, including traditional off-chain secure HTTP/messaging, documents, private non-deterministic logic, and human workflows.
"},{"location":"overview/multiparty/deterministic/#the-fundamental-building-blocks","title":"The fundamental building blocks","text":"There are some fundamental types of deterministic computation, that can be proved with mature blockchain technology, and all multi-party systems should consider exploiting:
There are use cases where a deterministic agreement on computation is desired, but the data upon which the execution is performed cannot be shared between all the parties.
For example, proving total conservation of value in a token trading scenario without knowing who is involved in the individual transactions. Or proving you have access to a piece of data, without disclosing what that data is.
Technologies exist that can solve these requirements, with two major categories:
FireFly today provides an orchestration engine that's helpful in coordinating the inputs, outputs, and execution of such advanced cryptography technologies.
Active collaboration between the FireFly and other projects like Hyperledger Avalon, and Hyperledger Cactus, is evolving how these technologies can plug-in with higher level patterns.
"},{"location":"overview/multiparty/deterministic/#complementary-approaches-to-deterministic-computation","title":"Complementary approaches to deterministic computation","text":"Enterprise multi-party systems usually operate differently to end-user decentralized applications. In particular, strong identity is established for the organizations that are involved, and those organizations usually sign legally binding commitments around their participation in the network. Those businesses then bring on-board an ecosystem of employees and or customers that are end-users to the system.
So the shared source of truth empowered by the blockchain and other cryptography are not the only tools that can be used in the toolbox to ensure correct behavior. Recognizing that there are real legal entities involved, that are mature and regulated, does not undermine the value of the blockchain components. In fact it enhances it.
A multi-party system can use just enough of this secret sauce in the right places, to change the dynamics of trust such that competitors in a market are willing to create value together that could never be created before.
Or create a system where parties can share data with each other while still conforming to their own regulatory and audit commitments, that previously would have been impossible to share.
Not to be overlooked is the sometimes astonishing efficiency increase that can be added to existing business relationships by being able to agree the order and sequence of a set of events. It provides the tools to digitize processes that previously involved physical documents flying round the world into near-immediate digital agreement, where a dispute can be arbitrated at a tiny fraction of the cost that would have been possible without a shared and immutable audit trail of who said what, when.
"},{"location":"overview/multiparty/multiparty_flow/","title":"Multiparty Process Flows","text":""},{"location":"overview/multiparty/multiparty_flow/#flow-features","title":"Flow features","text":"Data, value, and process flow are how decentralized systems function. In an enterprise context not all of this data can be shared with all parties, and some is very sensitive.
"},{"location":"overview/multiparty/multiparty_flow/#private-data-flow","title":"Private data flow","text":"Managing the flows of data so that the right information is shared with the right parties, at the right time, means thinking carefully about what data flows over what channel.
The number of enterprise solutions where all data can flow directly through the blockchain, is vanishingly small.
Coordinating these different data flows is often one of the biggest pieces of heavy lifting solved on behalf of the application by a robust framework like FireFly:
Web3 has the potential to transform how ecosystems interact. Digitally transforming legacy process flows, by giving deterministic outcomes that are trusted by all parties, backed by new forms of digital trust between parties.
Some of the most interesting use cases require complex multi-step business processes across participants. The Web3 version of business process management comes with some new challenges.
So you need the platform to:
Business processes need data, and that data comes in many shapes and sizes.
The platform needs to handle all of them:
The ability to globally sequence events across parties is a game changing capability of a multiparty system. FireFly is designed to allow developers to harness that power in the application layer, to build sophisticated multi-party APIs and user experiences.
Building a successful multi-party system is often about business experimentation, and business results. Proving the efficiency gains, and new business models, made possible by working together in a new way under a new system of trust.
Things that can get in the way of that innovation include concerns over data privacy, technology maturity, and constraints on the autonomy of an individual party in the system. An easy-to-explain position on how new technology components are used, where data lives, and how business process independence is maintained can really help parties make the leap of faith necessary to take the step towards a new model.
Keys to success often include building great user experiences that help digitize clunky, decades-old manual processes, as well as easy-to-integrate APIs that embrace the existing core systems of record established within each party.
"},{"location":"overview/multiparty/multiparty_flow/#consider-the-on-chain-toolbox-too","title":"Consider the on-chain toolbox too","text":"There is a huge amount of value that deterministic execution of multi-party logic within the blockchain can add. However, the more compute is made fully deterministic via a blockchain consensus algorithm validated by multiple parties beyond those with a business need for access to the data, the more sensitivity needs to be taken to data privacy. Also bear in mind any data that is used in this processing becomes immutable - it can never be deleted.
The core constructs of blockchain are a great place to start. Almost every process can be enhanced with pre-built fungible and non-fungible tokens, for example. Maybe it's to build a token economy that enhances the value parties get from the system, or to encourage healthy participation (and discourage bad behavior). Or maybe it's to track exactly which party owns a document, asset, or action within a process using NFTs.
On top of this you can add advanced tools like digital escrow, signature / threshold based voting on outcomes, and atomic swaps of value/ownership.
The investment in building this bespoke on-chain logic is higher than building the off-chain pieces (and there are always some off-chain pieces as we've discussed), so it's about finding the kernel of value the blockchain can provide to differentiate your solution from a centralized database solution.
The power provided by deterministic sequencing of events, attested by signatures, and pinned to private data might be sufficient for some cases. In others the token constructs are the key value that differentiates the decentralized ecosystem. Whatever it is, it's important it is identified and crafted carefully.
Note that advanced privacy preserving techniques such as zero-knowledge proofs (ZKP) are gaining traction and hardening in their production readiness and efficiency. Expect these to play an increasing role in the technology stack of multiparty systems (and Hyperledger FireFly) in the future.
Learn more in the Deterministic Compute section.
"},{"location":"reference/api_post_syntax/","title":"API POST Syntax","text":""},{"location":"reference/api_post_syntax/#syntax-overview","title":"Syntax Overview","text":"Endpoints that allow submitting a transaction allow an optional query parameter called confirm. When confirm=true is set in the query string, FireFly will wait to send an HTTP response until the message has been confirmed. This means, where a blockchain transaction is involved, the HTTP request will not return until the blockchain transaction is complete.
This is useful for endpoints such as registration, where the client app cannot proceed until the transaction is complete and the member/node is registered. Rather than making a request to register a member/node and then repeatedly polling the API to check whether it succeeded, an HTTP client can use this query parameter and block until registration is complete.
NOTE: This does not mean that any other member of the network has received, processed, or responded to the message. It just means that the transaction is complete from the perspective of the FireFly node to which the transaction was submitted.
"},{"location":"reference/api_post_syntax/#example-api-call","title":"Example API Call","text":"POST /api/v1/messages/broadcast?confirm=true
This will broadcast a message and wait for the message to be confirmed before returning.
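As a sketch, the request above could be built from Python's standard library as follows. The base URL and message body are illustrative assumptions, and the request is only constructed here, not sent:

```python
import json
import urllib.request

# Hypothetical FireFly base URL - adjust for your environment
BASE_URL = "http://localhost:5000/api/v1"

def build_broadcast_request(payload: dict, confirm: bool = True) -> urllib.request.Request:
    """Build a POST to /messages/broadcast; with confirm=True, FireFly holds the
    HTTP response until the message (and any blockchain transaction) is confirmed."""
    url = f"{BASE_URL}/messages/broadcast"
    if confirm:
        url += "?confirm=true"
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_broadcast_request({"data": [{"value": "hello"}]})
```

The request could then be sent with `urllib.request.urlopen(req)`; because of `confirm=true`, the call blocks until confirmation rather than requiring the client to poll.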
"},{"location":"reference/api_query_syntax/","title":"API Query Syntax","text":""},{"location":"reference/api_query_syntax/#syntax-overview","title":"Syntax Overview","text":"REST collections provide filter, skip, limit and sort support.
Each filter has the form field=[modifiers][operator]match-string. For example: GET /api/v1/messages?confirmed=>0&type=broadcast&topic=t1&topic=t2&context=@someprefix&sort=sequence&descending&skip=100&limit=50
This states:
- confirmed greater than 0
- type exactly equal to broadcast
- topic exactly equal to t1 or t2
- context containing the case-sensitive string someprefix
- sorted by sequence in descending order
- limit of 50 and skip of 100 (e.g. get page 3, with 50/page)
Below is the table of filter operations. The operator must be the first character of the match string (after the = in the above URL path example).
Operators are a type of comparison operation to perform against the match string.
Operator Description
= Equal
(none) Equal (shortcut)
@ Containing
^ Starts with
$ Ends with
<< Less than
< Less than (shortcut)
<= Less than or equal
>> Greater than
> Greater than (shortcut)
>= Greater than or equal
Shortcuts are only safe to use when your match string starts with a-z, A-Z, 0-9, - or _.
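To illustrate how these operators compose into a request URL, here is a hand-rolled Python sketch (not a FireFly client library; field names come from the example earlier on this page, and the [ modifier used on the created field is described below):

```python
from urllib.parse import parse_qsl, urlencode

# Filter values carry the operator as their first character(s);
# urlencode percent-escapes them and the server decodes them back.
params = [
    ("confirmed", ">0"),         # > shortcut: greater than 0
    ("type", "broadcast"),       # no operator: exact-equality shortcut
    ("topic", "t1"),             # repeating a field combines with OR by default
    ("topic", "t2"),
    ("context", "@someprefix"),  # @ operator: containing (case-sensitive)
    ("created", "[>>2021-01-01T00:00:00Z"),  # [ modifier: AND on the same field
    ("created", "[<=2021-01-02T00:00:00Z"),
    ("sort", "sequence"),
    ("skip", "100"),
    ("limit", "50"),
]
url = "/api/v1/messages?" + urlencode(params)
```

The bare descending flag from the earlier example is omitted here only because urlencode always emits key=value pairs; it could be appended to the URL directly.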
Modifiers can appear before the operator, to change its behavior.
Modifier Description
! Not - negates the match
: Case insensitive
? Treat empty match string as null
[ Combine using AND on the same field
] Combine using OR on the same field (default)"},{"location":"reference/api_query_syntax/#detailed-examples","title":"Detailed examples","text":"Example Description
cat Equals \"cat\"
=cat Equals \"cat\" (same)
!=cat Not equal to \"cat\"
:=cat Equal to \"CAT\", \"cat\", \"CaT\" etc.
!:cat Not equal to \"CAT\", \"cat\", \"CaT\" etc.
=!cat Equal to \"!cat\" (! is after operator)
^cats/ Starts with \"cats/\"
$_cat Ends with \"_cat\"
!:^cats/ Does not start with \"cats/\", \"CATs/\" etc.
!$-cat Does not end with \"-cat\"
?= Is null
!?= Is not null"},{"location":"reference/api_query_syntax/#time-range-example","title":"Time range example","text":"For this case we need to combine multiple queries on the same created field using AND semantics (with the [ modifier):
?created=[>>2021-01-01T00:00:00Z&created=[<=2021-01-02T00:00:00Z\n So this means:
created greater than 2021-01-01T00:00:00Z, AND created less than or equal to 2021-01-02T00:00:00Z.
The receipt for a FireFly blockchain operation contains an extraInfo section that records additional information about the transaction. For example:
\"receipt\": {\n ...\n \"extraInfo\": [\n {\n \"contractAddress\":\"0x87ae94ab290932c4e6269648bb47c86978af4436\",\n \"cumulativeGasUsed\":\"33812\",\n \"from\":\"0x2b1c769ef5ad304a4889f2a07a6617cd935849ae\",\n \"to\":\"0x302259069aaa5b10dc6f29a9a3f72a8e52837cc3\",\n \"gasUsed\":\"33812\",\n \"status\":\"0\",\n \"errorMessage\":\"Not enough tokens\"\n }\n ],\n ...\n},\n The errorMessage field can be set by a blockchain connector to provide FireFly and the end-user with more information about the reason why a transaction failed. The blockchain connector can choose what information to include in the errorMessage field. It may be set to an error message relating to the blockchain connector itself or an error message passed back from the blockchain or smart contract that was invoked.
If FireFly is configured to connect to a Besu EVM client, and Besu has been configured with the revert-reason-enabled=true setting (note - the default value for Besu is false), error messages passed to FireFly from the blockchain client itself will be set correctly in the FireFly blockchain operation. For example:
\"errorMessage\":\"Not enough tokens\" for a revert error string from a smart contract.
If the smart contract uses a custom error type, Besu will return the revert reason to FireFly as a hexadecimal string, but FireFly will be unable to decode it. In this case the blockchain operation error message and return values will be set to:
\"errorMessage\":\"FF23053: Error return value for custom error: <revert hex string>\"
\"returnValue\":\"<revert hex string>\"
A future update to FireFly could be made to automatically decode custom error revert reasons if FireFly knows the ABI for the custom error. See FireFly issue 1466, which describes the current limitation.
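As an illustration of what such decoding involves when the ABI is known, the custom error from this page can be unpacked by hand. This is a hand-rolled sketch for custom errors whose arguments are all static uint256 values - it is not FireFly functionality; the selector 0x1320fa6a and hex payload are taken from the example elsewhere on this page:

```python
def decode_uint256_error_args(return_value_hex: str, expected_selector: str) -> list[int]:
    """Decode a custom error whose arguments are all static uint256 values."""
    data = bytes.fromhex(return_value_hex.removeprefix("0x"))
    selector, body = data[:4], data[4:]
    if selector.hex() != expected_selector:
        raise ValueError(f"unexpected selector 0x{selector.hex()}")
    # Static ABI arguments are packed as consecutive 32-byte big-endian words
    return [int.from_bytes(body[i:i + 32], "big") for i in range(0, len(body), 32)]

# error AllowanceTooSmall(uint256 requested, uint256 allowance)
raw = ("0x1320fa6a"
       "0000000000000000000000000000000000000000000000000000000000000064"
       "0000000000000000000000000000000000000000000000000000000000000010")
requested, allowance = decode_uint256_error_args(raw, "1320fa6a")
```

Dynamic argument types (strings, arrays) need full ABI-offset handling, which is why general-purpose decoding requires the error's ABI as discussed above.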
If FireFly is configured to connect to Besu without revert-reason-enabled=true the error message will be set to:
\"errorMessage\":\"FF23054: Error return value unavailable\"
The precise format of the error message in a blockchain operation can vary based on different factors. The sections below describe in detail how the error message is populated, with specific references to the firefly-evmconnect blockchain connector.
firefly-evmconnect Error Message","text":"The following section describes the way that the firefly-evmconnect plugin uses the errorMessage field. This serves both as an explanation of how EVM-based transaction errors will be formatted, and as a guide that other blockchain connectors may decide to follow.
The errorMessage field for a firefly-evmconnect transaction may contain one of the following:
- \"FF23054: Error return value unavailable\" if the transaction failed, but the reason for the failure could not be obtained from the EVM client.
- A decoded error string from the smart contract, for example Not enough tokens, if the transaction failed with a revert reason in the standard Error(string) format. For example, this error could have been produced by the Solidity code: require(requestedTokens <= allowance, \"Not enough tokens\");
- An undecoded error value, for example FF23053: Error return value for custom error: 0x1320fa6a00000000000000000000000000000000000000000000000000000000000000640000000000000000000000000000000000000000000000000000000000000010\n if the transaction failed with a custom error type, such as: error AllowanceTooSmall(uint256 requested, uint256 allowance);\n...\nrevert AllowanceTooSmall({ requested: 100, allowance: 20 });\n In this case the returnValue of the extraInfo will be set to the raw byte string. For example: \"receipt\": {\n ...\n \"extraInfo\": [\n {\n \"contractAddress\":\"0x87ae94ab290932c4e6269648bb47c86978af4436\",\n \"cumulativeGasUsed\":\"33812\",\n \"from\":\"0x2b1c769ef5ad304a4889f2a07a6617cd935849ae\",\n \"to\":\"0x302259069aaa5b10dc6f29a9a3f72a8e52837cc3\",\n \"gasUsed\":\"33812\",\n \"status\":\"0\",\n \"errorMessage\":\"FF23053: Error return value for custom error: 0x1320fa6a00000000000000000000000000000000000000000000000000000000000000640000000000000000000000000000000000000000000000000000000000000010\",\n \"returnValue\":\"0x1320fa6a00000000000000000000000000000000000000000000000000000000000000640000000000000000000000000000000000000000000000000000000000000010\"\n }\n ],\n ...\n},\n
The ability of a blockchain connector such as firefly-evmconnect to retrieve the reason for a transaction failure is dependent on the configuration of the blockchain it is connected to. For an EVM blockchain the reason why a transaction failed is recorded with the REVERT op code, with a REASON set to the reason for the failure. By default, most EVM clients do not store this reason in the transaction receipt. This is typically to reduce resource consumption such as memory usage in the client. It is usually possible to configure an EVM client to store the revert reason in the transaction receipt. For example Hyperledger Besu\u2122 provides the --revert-reason-enabled configuration option. 
If the transaction receipt does not contain the revert reason it is possible to request that an EVM client re-run the transaction and return a trace of all of the op-codes, including the final REVERT REASON. This can be a resource intensive request to submit to an EVM client, and is only available on archive nodes or for very recent blocks.
The firefly-evmconnect blockchain connector attempts to obtain the reason for a transaction revert and include it in the extraInfo field. It uses the following mechanisms, in this order:
1. Checks the transaction receipt returned by the EVM client, in case the revert reason has been recorded there.
2. If the connector.traceTXForRevertReason configuration option is set to true, calls debug_traceTransaction to obtain a full trace of the transaction and extract the revert reason. By default, connector.traceTXForRevertReason is set to false to avoid submitting high-resource requests to the EVM client.
If the revert reason can be obtained using either mechanism above, the revert reason bytes are decoded in the following way: - Attempts to decode the bytes as the standard Error(string) signature format and includes the decoded string in the errorMessage - If the reason is not a standard Error(string) error, sets the errorMessage to FF23053: Error return value for custom error: <raw hex string> and includes the raw byte string in the returnValue field.
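The decoding steps above can be sketched in Python as follows. This is a hand-written illustration of the described logic, not the connector's actual code; the FF23053 message format is taken from this page, and the 0x08c379a0 selector is the well-known 4-byte signature of Error(string):

```python
ERROR_STRING_SELECTOR = "08c379a0"  # first 4 bytes of keccak256("Error(string)")

def decode_revert_reason(return_value_hex: str) -> str:
    """Decode EVM revert bytes: standard Error(string) reasons become the decoded
    string; anything else is reported as an undecoded custom error."""
    data = bytes.fromhex(return_value_hex.removeprefix("0x"))
    if data[:4].hex() == ERROR_STRING_SELECTOR:
        # ABI layout after the selector: 32-byte offset, 32-byte length, then data
        offset = int.from_bytes(data[4:36], "big")
        length = int.from_bytes(data[4 + offset:4 + offset + 32], "big")
        start = 4 + offset + 32
        return data[start:start + length].decode("utf-8")
    # Not a standard Error(string): report the raw bytes, mirroring FF23053
    return f"FF23053: Error return value for custom error: 0x{data.hex()}"
```

A caller would place the first branch's result directly in errorMessage, and for the second branch also record the raw hex in returnValue.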
Every FireFly Transaction can involve zero or more Operations. Blockchain operations are handled by the blockchain connector configured for the namespace and represent a blockchain transaction being handled by that connector.
"},{"location":"reference/blockchain_operation_status/#blockchain-operation-status_1","title":"Blockchain Operation Status","text":"A blockchain operation can require the connector to go through various stages of processing in order to successfully confirm the transaction on the blockchain. The orchestrator in FireFly receives updates from the connector to indicate when the operation has been completed and to determine when the FireFly transaction as a whole has finished. These updates must contain enough information to correlate the operation to the FireFly transaction, but it can be useful to see more detailed information about how the transaction was processed.
FireFly 1.2 introduced the concept of sub-status types that allow a blockchain connector to distinguish between the intermediate steps involved in progressing a transaction. It also introduced the concept of an action which a connector might carry out in order to progress between types of sub-status. This can be described as a state machine as shown in the following diagram:
To access detailed information about a blockchain operation FireFly 1.2 introduced a new query parameter, fetchStatus, to the /transaction/{txid}/operation/{opid} API. When FireFly receives an API request that includes the fetchStatus query parameter it makes a synchronous call directly to the blockchain connector, requesting all of the blockchain transaction detail that it has. This payload is then included in the FireFly operation response under a new detail field.
{\n \"id\": \"04a8b0c4-03c2-4935-85a1-87d17cddc20a\",\n \"created\": \"2022-05-16T01:23:15Z\",\n \"namespace\": \"ns1\",\n \"tx\": \"99543134-769b-42a8-8be4-a5f8873f969d\",\n \"type\": \"blockchain_invoke\",\n \"status\": \"Succeeded\",\n \"plugin\": \"ethereum\",\n \"input\": {\n // Input used to initiate the blockchain operation\n },\n \"output\": {\n // Minimal blockchain operation data necessary\n // to resolve the FF transaction\n },\n \"detail\": {\n // Full blockchain operation information, including sub-status\n // transitions that took place for the operation to succeed.\n }\n}\n"},{"location":"reference/blockchain_operation_status/#detail-status-structure","title":"Detail Status Structure","text":"The structure of a blockchain operation follows the structure described in Operations. In FireFly 1.2, 2 new attributes were added to that structure to allow more detailed status information to be recorded:
The history field is designed to record an ordered list of sub-status changes that the transaction has gone through. Within each sub-status change are the actions that have been carried out to try and move the transaction on to a new sub-status. Some transactions might spend a long time looping between different sub-status types, so this field records the N most recent sub-status changes (where the size of N is determined by the blockchain connector and its configuration). The following example shows a transaction starting at Received, moving to Tracking, and finally ending up as Confirmed. In order to move from Received to Tracking several actions were performed: AssignNonce, RetrieveGasPrice, and SubmitTransaction.
{\n ...\n \"lastSubmit\": \"2023-01-27T17:11:41.222375469Z\",\n \"nonce\": \"14\",\n \"history\": [\n {\n \"subStatus\": \"Received\",\n \"time\": \"2023-01-27T17:11:41.122965803Z\",\n \"actions\": [\n {\n \"action\": \"AssignNonce\",\n \"count\": 1,\n \"lastInfo\": {\n \u2003 \"nonce\": \"14\"\n },\n \"lastOccurrence\": \"2023-01-27T17:11:41.122967219Z\",\n \"time\": \"2023-01-27T17:11:41.122967136Z\"\n },\n \u2003 {\n \"action\": \"RetrieveGasPrice\",\n \"count\": 1,\n \"lastInfo\": {\n \"gasPrice\": \"0\"\n },\n \"lastOccurrence\": \"2023-01-27T17:11:41.161213303Z\",\n \"time\": \"2023-01-27T17:11:41.161213094Z\"\n },\n {\n \"action\": \"SubmitTransaction\",\n \"count\": 1,\n \u2003 \"lastInfo\": {\n \"txHash\": \"0x4c37de1cf320a1d5c949082bbec8ad5fe918e6621cec3948d609ec3f7deac243\"\n },\n \u2003 \"lastOccurrence\": \"2023-01-27T17:11:41.222374636Z\",\n \u2003 \"time\": \"2023-01-27T17:11:41.222374553Z\"\n \u2003 }\n \u2003 ],\n \u2003 },\n \u2003 {\n \u2003\u2003 \"subStatus\": \"Tracking\",\n \"time\": \"2023-01-27T17:11:41.222400219Z\",\n \u2003 \"actions\": [\n \u2003\u2003\u2003 {\n \u2003\u2003\u2003\u2003 \"action\": \"ReceiveReceipt\",\n \u2003\u2003\u2003\u2003 \"count\": 2,\n \u2003\u2003\u2003\u2003 \"lastInfo\": {\n \u2003\u2003\u2003\u2003\u2003 \"protocolId\": \"000001265122/000000\"\n \u2003\u2003\u2003\u2003 },\n \u2003\u2003\u2003\u2003 \"lastOccurrence\": \"2023-01-27T17:11:57.93120838Z\",\n \u2003\u2003\u2003\u2003 \"time\": \"2023-01-27T17:11:47.930332625Z\"\n \u2003\u2003\u2003 },\n \u2003\u2003\u2003 {\n \u2003\u2003\u2003\u2003 \"action\": \"Confirm\",\n \u2003\u2003\u2003\u2003 \"count\": 1,\n \u2003\u2003\u2003\u2003 \"lastOccurrence\": \"2023-01-27T17:12:02.660275549Z\",\n \u2003\u2003\u2003\u2003 \"time\": \"2023-01-27T17:12:02.660275382Z\"\n \u2003\u2003\u2003 }\n \u2003\u2003 ],\n \u2003 },\n \u2003 {\n \u2003\u2003 \"subStatus\": \"Confirmed\",\n \u2003\u2003 \"time\": \"2023-01-27T17:12:02.660309382Z\",\n \u2003\u2003 \"actions\": 
[],\n \u2003 }\n ]\n ...\n}\n Because the history field is a FIFO structure describing the N most recent sub-status changes, some early sub-status changes or actions may be lost over time. For example, an action of AssignNonce might only happen once when the transaction is first processed by the connector. The historySummary field ensures that a minimal set of information is kept about every single subStatus type and action that has been recorded.
{\n ...\n \"historySummary\": [\n {\n \"count\": 1,\n \u2003 \"firstOccurrence\": \"2023-01-27T17:11:41.122966136Z\",\n \"lastOccurrence\": \"2023-01-27T17:11:41.122966136Z\",\n \u2003 \"subStatus\": \"Received\"\n },\n {\n \"count\": 1,\n \"firstOccurrence\": \"2023-01-27T17:11:41.122967219Z\",\n \"lastOccurrence\": \"2023-01-27T17:11:41.122967219Z\",\n \"action\": \"AssignNonce\"\n },\n {\n \"count\": 1,\n \"firstOccurrence\": \"2023-01-27T17:11:41.161213303Z\",\n \"lastOccurrence\": \"2023-01-27T17:11:41.161213303Z\",\n \"action\": \"RetrieveGasPrice\"\n },\n {\n \"count\": 1,\n \"firstOccurrence\": \"2023-01-27T17:11:41.222374636Z\",\n \"lastOccurrence\": \"2023-01-27T17:11:41.222374636Z\",\n \"action\": \"SubmitTransaction\"\n },\n {\n \u2003 \"count\": 1,\n \u2003 \"firstOccurrence\": \"2023-01-27T17:11:41.222400678Z\",\n \"lastOccurrence\": \"\",\n \u2003 \"subStatus\": \"Tracking\"\n },\n {\n \"count\": 1,\n \"firstOccurrence\": \"2023-01-27T17:11:57.93120838Z\",\n \"lastOccurrence\": \"2023-01-27T17:11:57.93120838Z\",\n \"action\": \"ReceiveReceipt\"\n },\n {\n \"count\": 1,\n \"firstOccurrence\": \"2023-01-27T17:12:02.660309382Z\",\n \"lastOccurrence\": \"2023-01-27T17:12:02.660309382Z\",\n \"action\": \"Confirm\"\n },\n {\n \u2003 \"count\": 1,\n \u2003 \"firstOccurrence\": \"2023-01-27T17:12:02.660309757Z\",\n \"lastOccurrence\": \"2023-01-27T17:12:02.660309757Z\",\n \u2003 \"subStatus\": \"Confirmed\"\n }\n ]\n}\n"},{"location":"reference/blockchain_operation_status/#public-chain-operations","title":"Public Chain Operations","text":"Blockchain transactions submitted to a public chain, for example to Polygon PoS, might take longer and involve more sub-status transitions before being confirmed. One reason for this could be because of gas price fluctuations of the chain. In this case the history for a public blockchain operation might include a large number of subStatus entries. 
Using the example sub-status values above, a blockchain operation might move from Tracking to Stale, back to Tracking, back to Stale and so on.
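A client inspecting such an operation could derive its own rollup of these transitions from the history array. The sketch below is hand-written Python over the JSON structures shown on this page - it is not FireFly functionality, and it simplifies count to the number of history entries rather than every underlying occurrence:

```python
def summarize_history(history):
    """Collapse a 'history' array into per-subStatus and per-action entries with
    occurrence counts and first/last times, similar in shape to 'historySummary'."""
    summary = {}

    def record(kind, name, time):
        item = summary.setdefault((kind, name), {kind: name, "count": 0,
                                                 "firstOccurrence": time,
                                                 "lastOccurrence": time})
        item["count"] += 1
        item["lastOccurrence"] = time

    for entry in history:
        record("subStatus", entry["subStatus"], entry["time"])
        for action in entry.get("actions", []):
            record("action", action["action"], action.get("lastOccurrence", action["time"]))
    return list(summary.values())
```

For a public-chain operation that loops Tracking / Stale / Tracking, the rolled-up Tracking entry would carry a count of each time that sub-status was entered, even after the oldest history entries are dropped from a client's view.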
Below is an example of the history for a public blockchain operation.
{\n ...\n \"lastSubmit\": \"2023-01-27T17:11:41.222375469Z\",\n \"nonce\": \"14\",\n \"history\": [\n {\n \"subStatus\": \"Received\",\n \"time\": \"2023-01-27T17:11:41.122965803Z\",\n \"actions\": [\n {\n \"action\": \"AssignNonce\",\n \"count\": 1,\n \"lastInfo\": {\n \u2003 \"nonce\": \"1\"\n },\n \"lastOccurrence\": \"2023-01-27T17:11:41.122967219Z\",\n \"time\": \"2023-01-27T17:11:41.122967136Z\"\n },\n \u2003 {\n \"action\": \"RetrieveGasPrice\",\n \"count\": 1,\n \"lastInfo\": {\n \"gasPrice\": \"34422243\"\n },\n \"lastOccurrence\": \"2023-01-27T17:11:41.161213303Z\",\n \"time\": \"2023-01-27T17:11:41.161213094Z\"\n },\n {\n \"action\": \"SubmitTransaction\",\n \"count\": 1,\n \u2003 \"lastInfo\": {\n \"txHash\": \"0x83ba5e1cf320a1d5c949082bbec8ae7fe918e6621cec39478609ec3f7deacbdb\"\n },\n \u2003 \"lastOccurrence\": \"2023-01-27T17:11:41.222374636Z\",\n \u2003 \"time\": \"2023-01-27T17:11:41.222374553Z\"\n \u2003 }\n \u2003 ],\n \u2003 },\n \u2003 {\n \u2003\u2003 \"subStatus\": \"Tracking\",\n \"time\": \"2023-01-27T17:11:41.222400219Z\",\n \u2003 \"actions\": [],\n \u2003 },\n \u2003 {\n \u2003\u2003 \"subStatus\": \"Stale\",\n \"time\": \"2023-01-27T17:13:21.222100434Z\",\n \u2003 \"actions\": [\n \u2003\u2003\u2003 {\n \u2003\u2003\u2003\u2003 \"action\": \"RetrieveGasPrice\",\n \u2003\u2003\u2003\u2003 \"count\": 1,\n \"lastInfo\": {\n \"gasPrice\": \"44436243\"\n },\n \u2003\u2003\u2003\u2003 \"lastOccurrence\": \"2023-01-27T17:13:22.93120838Z\",\n \u2003\u2003\u2003\u2003 \"time\": \"2023-01-27T17:13:22.93120838Z\"\n \u2003\u2003\u2003 },\n {\n \"action\": \"SubmitTransaction\",\n \"count\": 1,\n \u2003 \"lastInfo\": {\n \"txHash\": \"0x7b3a5e1ccbc0a1d5c949082bbec8ae7fe918e6621cec39478609ec7aea6103d5\"\n },\n \u2003 \"lastOccurrence\": \"2023-01-27T17:13:32.656374637Z\",\n \u2003 \"time\": \"2023-01-27T17:13:32.656374637Z\"\n \u2003 }\n \u2003\u2003 ],\n \u2003 },\n \u2003 {\n \u2003\u2003 \"subStatus\": \"Tracking\",\n \"time\": 
\"2023-01-27T17:13:33.434400219Z\",\n \u2003 \"actions\": [],\n \u2003 },\n \u2003 {\n \u2003\u2003 \"subStatus\": \"Stale\",\n \"time\": \"2023-01-27T17:15:21.222100434Z\",\n \u2003 \"actions\": [\n \u2003\u2003\u2003 {\n \u2003\u2003\u2003\u2003 \"action\": \"RetrieveGasPrice\",\n \u2003\u2003\u2003\u2003 \"count\": 1,\n \"lastInfo\": {\n \"gasPrice\": \"52129243\"\n },\n \u2003\u2003\u2003\u2003 \"lastOccurrence\": \"2023-01-27T17:15:22.93120838Z\",\n \u2003\u2003\u2003\u2003 \"time\": \"2023-01-27T17:15:22.93120838Z\"\n \u2003\u2003\u2003 },\n {\n \"action\": \"SubmitTransaction\",\n \"count\": 1,\n \u2003 \"lastInfo\": {\n \"txHash\": \"0x89995e1ccbc0a1d5c949082bbec8ae7fe918e6621cec39478609ec7a8c64abc\"\n },\n \u2003 \"lastOccurrence\": \"2023-01-27T17:15:32.656374637Z\",\n \u2003 \"time\": \"2023-01-27T17:15:32.656374637Z\"\n \u2003 }\n \u2003\u2003 ],\n \u2003 },\n \u2003 {\n \u2003\u2003 \"subStatus\": \"Tracking\",\n \"time\": \"2023-01-27T17:15:33.434400219Z\",\n \u2003 \"actions\": [\n \u2003\u2003\u2003 {\n \u2003\u2003\u2003\u2003 \"action\": \"ReceiveReceipt\",\n \u2003\u2003\u2003\u2003 \"count\": 1,\n \u2003\u2003\u2003\u2003 \"lastInfo\": {\n \u2003\u2003\u2003\u2003\u2003 \"protocolId\": \"000004897621/000000\"\n \u2003\u2003\u2003\u2003 },\n \u2003\u2003\u2003\u2003 \"lastOccurrence\": \"2023-01-27T17:15:33.94120833Z\",\n \u2003\u2003\u2003\u2003 \"time\": \"2023-01-27T17:15:33.94120833Z\"\n \u2003\u2003\u2003 },\n \u2003\u2003\u2003 {\n \u2003\u2003\u2003\u2003 \"action\": \"Confirm\",\n \u2003\u2003\u2003\u2003 \"count\": 1,\n \u2003\u2003\u2003\u2003 \"lastOccurrence\": \"2023-01-27T17:16:02.780275549Z\",\n \u2003\u2003\u2003\u2003 \"time\": \"2023-01-27T17:16:02.780275382Z\"\n \u2003\u2003\u2003 }\n \u2003\u2003 ],\n \u2003 },\n \u2003 {\n \u2003\u2003 \"subStatus\": \"Confirmed\",\n \u2003\u2003 \"time\": \"2023-01-27T17:16:03.990309381Z\",\n \u2003\u2003 \"actions\": [],\n \u2003 }\n ]\n 
...\n}\n"},{"location":"reference/config/","title":"Configuration Reference","text":""},{"location":"reference/config/#admin","title":"admin","text":"Key Description Type Default Value enabled Deprecated - use spi.enabled instead boolean <nil>"},{"location":"reference/config/#api","title":"api","text":"Key Description Type Default Value defaultFilterLimit The maximum number of rows to return if no limit is specified on an API request int 25 dynamicPublicURLHeader Dynamic header that informs the backend the base public URL for the request, in order to build URL links in OpenAPI/SwaggerUI string <nil> maxFilterLimit The largest value of limit that an HTTP client can specify in a request int 1000 passthroughHeaders A list of HTTP request headers to pass through to dependency microservices []string [] requestMaxTimeout The maximum amount of time that an HTTP client can specify in a Request-Timeout header to keep a specific request open time.Duration 10m requestTimeout The maximum amount of time that a request is allowed to remain open time.Duration 120s"},{"location":"reference/config/#assetmanager","title":"asset.manager","text":"Key Description Type Default Value keyNormalization Mechanism to normalize keys before using them. 
Valid options are blockchain_plugin - use blockchain plugin (default) or none - do not attempt normalization (deprecated - use namespaces.predefined[].asset.manager.keyNormalization) string blockchain_plugin"},{"location":"reference/config/#batchmanager","title":"batch.manager","text":"Key Description Type Default Value minimumPollDelay The minimum time the batch manager waits between polls on the DB - to prevent thrashing time.Duration 100ms pollTimeout How long to wait without any notifications of new messages before doing a page query time.Duration 30s readPageSize The size of each page of messages read from the database into memory when assembling batches int 100"},{"location":"reference/config/#batchretry","title":"batch.retry","text":"Key Description Type Default Value factor The retry backoff factor float32 2 initDelay The initial retry delay time.Duration 250ms maxDelay The maximum retry delay time.Duration 30s"},{"location":"reference/config/#blobreceiverretry","title":"blobreceiver.retry","text":"Key Description Type Default Value factor The retry backoff factor float32 2 initialDelay The initial retry delay time.Duration 250ms maxDelay The maximum retry delay time.Duration 1m"},{"location":"reference/config/#blobreceiverworker","title":"blobreceiver.worker","text":"Key Description Type Default Value batchMaxInserts The maximum number of items the blob receiver worker will insert in a batch int 200 batchTimeout The maximum amount of time the blob receiver worker will wait time.Duration 50ms count The number of blob receiver workers int 5"},{"location":"reference/config/#broadcastbatch","title":"broadcast.batch","text":"Key Description Type Default Value agentTimeout How long to keep around a batching agent for a sending identity before disposal string 2m payloadLimit The maximum payload size of a batch for broadcast messages BytesSize 800Kb size The maximum number of messages that can be packed into a batch int 200 timeout The timeout to wait for a batch 
to fill, before sending time.Duration 1s"},{"location":"reference/config/#cache","title":"cache","text":"Key Description Type Default Value enabled Enables caching, defaults to true boolean true"},{"location":"reference/config/#cacheaddressresolver","title":"cache.addressresolver","text":"Key Description Type Default Value limit Max number of cached items for address resolver int 1000 ttl Time to live of cached items for address resolver string 24h"},{"location":"reference/config/#cachebatch","title":"cache.batch","text":"Key Description Type Default Value limit Max number of cached items for batches int 100 ttl Time to live of cache items for batches string 5m"},{"location":"reference/config/#cacheblockchain","title":"cache.blockchain","text":"Key Description Type Default Value limit Max number of cached items for blockchain int 100 ttl Time to live of cached items for blockchain string 5m"},{"location":"reference/config/#cacheblockchainevent","title":"cache.blockchainevent","text":"Key Description Type Default Value limit Max number of cached blockchain events for transactions int 1000 ttl Time to live of cached blockchain events for transactions string 5m"},{"location":"reference/config/#cacheeventlistenertopic","title":"cache.eventlistenertopic","text":"Key Description Type Default Value limit Max number of cached items for blockchain listener topics int 100 ttl Time to live of cached items for blockchain listener topics string 5m"},{"location":"reference/config/#cachegroup","title":"cache.group","text":"Key Description Type Default Value limit Max number of cached items for groups int 50 ttl Time to live of cached items for groups string 1h"},{"location":"reference/config/#cacheidentity","title":"cache.identity","text":"Key Description Type Default Value limit Max number of cached identities for identity manager int 100 ttl Time to live of cached identities for identity manager string 
1h"},{"location":"reference/config/#cachemessage","title":"cache.message","text":"Key Description Type Default Value size Max size of cached messages for data manager BytesSize 50Mb ttl Time to live of cached messages for data manager string 5m"},{"location":"reference/config/#cachemethods","title":"cache.methods","text":"Key Description Type Default Value limit Max number of cached items for schema validations on blockchain methods int 200 ttl Time to live of cached items for schema validations on blockchain methods string 5m"},{"location":"reference/config/#cacheoperations","title":"cache.operations","text":"Key Description Type Default Value limit Max number of cached items for operations int 1000 ttl Time to live of cached items for operations string 5m"},{"location":"reference/config/#cachetokenpool","title":"cache.tokenpool","text":"Key Description Type Default Value limit Max number of cached items for token pools int 100 ttl Time to live of cached items for token pool string 1h"},{"location":"reference/config/#cachetransaction","title":"cache.transaction","text":"Key Description Type Default Value size Max size of cached transactions BytesSize 1Mb ttl Time to live of cached transactions string 5m"},{"location":"reference/config/#cachevalidator","title":"cache.validator","text":"Key Description Type Default Value size Max size of cached validators for data manager BytesSize 1Mb ttl Time to live of cached validators for data manager string 1h"},{"location":"reference/config/#config","title":"config","text":"Key Description Type Default Value autoReload Monitor the configuration file for changes, and automatically add/remove/reload namespaces and plugins boolean <nil>"},{"location":"reference/config/#cors","title":"cors","text":"Key Description Type Default Value credentials CORS setting to control whether a browser allows credentials to be sent to this API boolean true debug Whether debug is enabled for the CORS implementation boolean false enabled Whether 
CORS is enabled boolean true headers CORS setting to control the allowed headers []string [*] maxAge The maximum age a browser should rely on CORS checks time.Duration 600 methods CORS setting to control the allowed methods []string [GET POST PUT PATCH DELETE] origins CORS setting to control the allowed origins []string [*]"},{"location":"reference/config/#debug","title":"debug","text":"Key Description Type Default Value address The HTTP interface the go debugger binds to string localhost port An HTTP port on which to enable the go debugger int -1"},{"location":"reference/config/#downloadretry","title":"download.retry","text":"Key Description Type Default Value factor The retry backoff factor float32 2 initialDelay The initial retry delay time.Duration 100ms maxAttempts The maximum number of attempts int 100 maxDelay The maximum retry delay time.Duration 1m"},{"location":"reference/config/#downloadworker","title":"download.worker","text":"Key Description Type Default Value count The number of download workers int 10 queueLength The length of the work queue in the channel to the workers - defaults to 2x the worker count int <nil>"},{"location":"reference/config/#eventaggregator","title":"event.aggregator","text":"Key Description Type Default Value batchSize The maximum number of records to read from the DB before performing an aggregation run BytesSize 200 batchTimeout How long to wait for new events to arrive before performing aggregation on a page of events time.Duration 0ms firstEvent The first event the aggregator should process, if no previous offset is stored in the DB. 
Valid options are oldest or newest string oldest pollTimeout The time to wait without a notification of new events, before trying a select on the table time.Duration 30s rewindQueryLimit Safety limit on the maximum number of records to search when performing queries to search for rewinds int 1000 rewindQueueLength The size of the queue into the rewind dispatcher int 10 rewindTimeout The minimum time to wait for rewinds to accumulate before resolving them time.Duration 50ms"},{"location":"reference/config/#eventaggregatorretry","title":"event.aggregator.retry","text":"Key Description Type Default Value factor The retry backoff factor float32 2 initDelay The initial retry delay time.Duration 100ms maxDelay The maximum retry delay time.Duration 30s"},{"location":"reference/config/#eventdbevents","title":"event.dbevents","text":"Key Description Type Default Value bufferSize The size of the buffer of change events BytesSize 100"},{"location":"reference/config/#eventdispatcher","title":"event.dispatcher","text":"Key Description Type Default Value batchTimeout A short time to wait for new events to arrive before re-polling for new events time.Duration 0ms bufferLength The number of events + attachments an individual dispatcher should hold in memory ready for delivery to the subscription int 5 pollTimeout The time to wait without a notification of new events, before trying a select on the table time.Duration 30s"},{"location":"reference/config/#eventdispatcherretry","title":"event.dispatcher.retry","text":"Key Description Type Default Value factor The retry backoff factor float32 <nil> initDelay The initial retry delay time.Duration <nil> maxDelay The maximum retry delay time.Duration <nil>"},{"location":"reference/config/#eventtransports","title":"event.transports","text":"Key Description Type Default Value default The default event transport for new subscriptions string websockets enabled Which event interface plugins are enabled boolean [websockets 
webhooks]"},{"location":"reference/config/#eventswebhooks","title":"events.webhooks","text":"Key Description Type Default Value connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted time.Duration 30s expectContinueTimeout See ExpectContinueTimeout in the Go docs time.Duration 1s headers Adds custom headers to HTTP requests map[string]string <nil> idleTimeout The max duration to hold a HTTP keepalive connection between calls time.Duration 475ms maxConnsPerHost The max number of connections, per unique hostname. Zero means no limit int 0 maxIdleConns The max number of idle connections to hold pooled int 100 maxIdleConnsPerHost The max number of idle connections, per unique hostname. Zero means net/http uses the default of only 2. int 100 passthroughHeadersEnabled Enable passing through the set of allowed HTTP request headers boolean false requestTimeout The maximum amount of time that a request is allowed to remain open time.Duration 30s tlsHandshakeTimeout The maximum amount of time to wait for a successful TLS handshake time.Duration 10s"},{"location":"reference/config/#eventswebhooksauth","title":"events.webhooks.auth","text":"Key Description Type Default Value password Password string <nil> username Username string <nil>"},{"location":"reference/config/#eventswebhooksproxy","title":"events.webhooks.proxy","text":"Key Description Type Default Value url Optional HTTP proxy server to connect through string <nil>"},{"location":"reference/config/#eventswebhooksretry","title":"events.webhooks.retry","text":"Key Description Type Default Value count The maximum number of times to retry int 5 enabled Enables retries boolean false errorStatusCodeRegex The regex that the error response status code must match to trigger retry string <nil> initWaitTime The initial retry delay time.Duration 250ms maxWaitTime The maximum retry delay time.Duration 
30s"},{"location":"reference/config/#eventswebhooksthrottle","title":"events.webhooks.throttle","text":"Key Description Type Default Value burst The maximum number of requests that can be made in a short period of time before the throttling kicks in. int <nil> requestsPerSecond The average rate at which requests are allowed to pass through over time. int <nil>"},{"location":"reference/config/#eventswebhookstls","title":"events.webhooks.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify Set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. 
Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#eventswebsockets","title":"events.websockets","text":"Key Description Type Default Value readBufferSize WebSocket read buffer size BytesSize 16Kb writeBufferSize WebSocket write buffer size BytesSize 16Kb"},{"location":"reference/config/#histograms","title":"histograms","text":"Key Description Type Default Value maxChartRows The maximum rows to fetch for each histogram bucket int 100"},{"location":"reference/config/#http","title":"http","text":"Key Description Type Default Value address The IP address on which the HTTP API should listen IP Address string 127.0.0.1 port The port on which the HTTP API should listen int 5000 publicURL The fully qualified public URL for the API. This is used for building URLs in HTTP responses and in OpenAPI Spec generation URL string <nil> readTimeout The maximum time to wait when reading from an HTTP connection time.Duration 15s shutdownTimeout The maximum amount of time to wait for any open HTTP requests to finish before shutting down the HTTP server time.Duration 10s writeTimeout The maximum time to wait when writing to an HTTP connection time.Duration 15s"},{"location":"reference/config/#httpauth","title":"http.auth","text":"Key Description Type Default Value type The auth plugin to use for server side authentication of requests string <nil>"},{"location":"reference/config/#httpauthbasic","title":"http.auth.basic","text":"Key Description Type Default Value passwordfile The path to a .htpasswd file to use for authenticating requests. Passwords should be hashed with bcrypt. 
string <nil>"},{"location":"reference/config/#httptls","title":"http.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify Set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#log","title":"log","text":"Key Description Type Default Value compress Determines if the rotated log files should be compressed using gzip boolean <nil> filename Filename is the file to write logs to. Backup log files will be retained in the same directory string <nil> filesize MaxSize is the maximum size of the log file before it gets rotated BytesSize 100m forceColor Force color to be enabled, even when a non-TTY output is detected boolean <nil> includeCodeInfo Enables the report caller for including the calling file and line number, and the calling function. If using text logs, it uses the logrus text format rather than the default prefix format. 
boolean false level The log level - error, warn, info, debug, trace string info maxAge The maximum time to retain old log files based on the timestamp encoded in their filename time.Duration 24h maxBackups Maximum number of old log files to retain int 2 noColor Force color to be disabled, even when TTY output is detected boolean <nil> timeFormat Custom time format for logs Time format string 2006-01-02T15:04:05.000Z07:00 utc Use UTC timestamps for logs boolean false"},{"location":"reference/config/#logjson","title":"log.json","text":"Key Description Type Default Value enabled Enables JSON formatted logs rather than text. All log color settings are ignored when enabled. boolean false"},{"location":"reference/config/#logjsonfields","title":"log.json.fields","text":"Key Description Type Default Value file Configures the JSON key containing the calling file string file func Configures the JSON key containing the calling function string func level Configures the JSON key containing the log level string level message Configures the JSON key containing the log message string message timestamp Configures the JSON key containing the timestamp of the log string @timestamp"},{"location":"reference/config/#messagewriter","title":"message.writer","text":"Key Description Type Default Value batchMaxInserts The maximum number of database inserts to include when writing a single batch of messages + data int 200 batchTimeout How long to wait for more messages to arrive before flushing the batch time.Duration 10ms count The number of message writer workers int 5"},{"location":"reference/config/#metrics","title":"metrics","text":"Key Description Type Default Value address Deprecated - use monitoring.address instead int 127.0.0.1 enabled Deprecated - use monitoring.enabled instead boolean true path Deprecated - use monitoring.metricsPath instead string /metrics port Deprecated - use monitoring.port instead int 6000 publicURL Deprecated - use monitoring.publicURL instead URL string 
<nil> readTimeout Deprecated - use monitoring.readTimeout instead time.Duration 15s shutdownTimeout The maximum amount of time to wait for any open HTTP requests to finish before shutting down the HTTP server time.Duration 10s writeTimeout Deprecated - use monitoring.writeTimeout instead time.Duration 15s"},{"location":"reference/config/#metricsauth","title":"metrics.auth","text":"Key Description Type Default Value type The auth plugin to use for server side authentication of requests string <nil>"},{"location":"reference/config/#metricsauthbasic","title":"metrics.auth.basic","text":"Key Description Type Default Value passwordfile The path to a .htpasswd file to use for authenticating requests. Passwords should be hashed with bcrypt. string <nil>"},{"location":"reference/config/#metricstls","title":"metrics.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify Set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. 
Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#monitoring","title":"monitoring","text":"Key Description Type Default Value address The IP address on which the metrics HTTP API should listen int 127.0.0.1 enabled Enables the metrics API boolean false metricsPath The path from which to serve the Prometheus metrics string /metrics port The port on which the metrics HTTP API should listen int 6000 publicURL The fully qualified public URL for the metrics API. This is used for building URLs in HTTP responses and in OpenAPI Spec generation URL string <nil> readTimeout The maximum time to wait when reading from an HTTP connection time.Duration 15s shutdownTimeout The maximum amount of time to wait for any open HTTP requests to finish before shutting down the HTTP server time.Duration 10s writeTimeout The maximum time to wait when writing to an HTTP connection time.Duration 15s"},{"location":"reference/config/#monitoringauth","title":"monitoring.auth","text":"Key Description Type Default Value type The auth plugin to use for server side authentication of requests string <nil>"},{"location":"reference/config/#monitoringauthbasic","title":"monitoring.auth.basic","text":"Key Description Type Default Value passwordfile The path to a .htpasswd file to use for authenticating requests. Passwords should be hashed with bcrypt. 
string <nil>"},{"location":"reference/config/#monitoringtls","title":"monitoring.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify Set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#namespaces","title":"namespaces","text":"Key Description Type Default Value default The default namespace - must be in the predefined list string default predefined A list of namespaces to ensure exist, without requiring a broadcast from the network List string <nil>"},{"location":"reference/config/#namespacespredefined","title":"namespaces.predefined[]","text":"Key Description Type Default Value defaultKey A default signing key for blockchain transactions within this namespace string <nil> description A description for the namespace string <nil> name The name of the namespace (must be unique) string <nil> plugins The list of plugins for this namespace string 
<nil>"},{"location":"reference/config/#namespacespredefinedassetmanager","title":"namespaces.predefined[].asset.manager","text":"Key Description Type Default Value keyNormalization Mechanism to normalize keys before using them. Valid options are blockchain_plugin - use blockchain plugin (default) or none - do not attempt normalization string <nil>"},{"location":"reference/config/#namespacespredefinedmultiparty","title":"namespaces.predefined[].multiparty","text":"Key Description Type Default Value enabled Enables multi-party mode for this namespace (defaults to true if an org name or key is configured, either here or at the root level) boolean <nil> networknamespace The shared namespace name to be sent in multiparty messages, if it differs from the local namespace name string <nil>"},{"location":"reference/config/#namespacespredefinedmultipartycontract","title":"namespaces.predefined[].multiparty.contract[]","text":"Key Description Type Default Value firstEvent The first event the contract should process. Valid options are oldest or newest string <nil> location A blockchain-specific contract location. 
For example, an Ethereum contract address, or a Fabric chaincode name and channel string <nil> options Blockchain-specific contract options string <nil>"},{"location":"reference/config/#namespacespredefinedmultipartynode","title":"namespaces.predefined[].multiparty.node","text":"Key Description Type Default Value description A description for the node in this namespace string <nil> name The node name for this namespace string <nil>"},{"location":"reference/config/#namespacespredefinedmultipartyorg","title":"namespaces.predefined[].multiparty.org","text":"Key Description Type Default Value description A description for the local root organization within this namespace string <nil> key The signing key allocated to the root organization within this namespace string <nil> name A short name for the local root organization within this namespace string <nil>"},{"location":"reference/config/#namespacespredefinedtlsconfigs","title":"namespaces.predefined[].tlsConfigs[]","text":"Key Description Type Default Value name Name of the TLS Config string <nil>"},{"location":"reference/config/#namespacespredefinedtlsconfigstls","title":"namespaces.predefined[].tlsConfigs[].tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify Set to true in unit test development environments to disable TLS verification. 
Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#namespacesretry","title":"namespaces.retry","text":"Key Description Type Default Value factor The retry backoff factor float32 2 initDelay The initial retry delay time.Duration 5s maxDelay The maximum retry delay time.Duration 1m"},{"location":"reference/config/#node","title":"node","text":"Key Description Type Default Value description The description of this FireFly node string <nil> name The name of this FireFly node string <nil>"},{"location":"reference/config/#opupdateretry","title":"opupdate.retry","text":"Key Description Type Default Value factor The retry backoff factor float32 2 initialDelay The initial retry delay time.Duration 250ms maxDelay The maximum retry delay time.Duration 1m"},{"location":"reference/config/#opupdateworker","title":"opupdate.worker","text":"Key Description Type Default Value batchMaxInserts The maximum number of database inserts to include when writing a single batch of messages + data int 200 batchTimeout How long to wait for more messages to arrive before flushing the batch time.Duration 50ms count The number of operation update workers int 5 queueLength The size of the queue for the Operation Update worker int 50"},{"location":"reference/config/#orchestrator","title":"orchestrator","text":"Key Description Type Default Value startupAttempts The number of times to attempt to connect to core infrastructure on startup string 5"},{"location":"reference/config/#org","title":"org","text":"Key Description Type Default 
Value description A description of the organization to which this FireFly node belongs (deprecated - should be set on each multi-party namespace instead) string <nil> key The signing key allocated to the organization (deprecated - should be set on each multi-party namespace instead) string <nil> name The name of the organization to which this FireFly node belongs (deprecated - should be set on each multi-party namespace instead) string <nil>"},{"location":"reference/config/#plugins","title":"plugins","text":"Key Description Type Default Value auth Authorization plugin configuration map[string]string <nil> blockchain The list of configured Blockchain plugins string <nil> database The list of configured Database plugins string <nil> dataexchange The array of configured Data Exchange plugins string <nil> identity The list of available Identity plugins string <nil> sharedstorage The list of configured Shared Storage plugins string <nil> tokens The token plugin configurations string <nil>"},{"location":"reference/config/#pluginsauth","title":"plugins.auth[]","text":"Key Description Type Default Value name The name of the auth plugin to use string <nil> type The type of the auth plugin to use string <nil>"},{"location":"reference/config/#pluginsauthbasic","title":"plugins.auth[].basic","text":"Key Description Type Default Value passwordfile The path to a .htpasswd file to use for authenticating requests. Passwords should be hashed with bcrypt. 
string <nil>"},{"location":"reference/config/#pluginsblockchain","title":"plugins.blockchain[]","text":"Key Description Type Default Value name The name of the configured Blockchain plugin string <nil> type The type of the configured Blockchain Connector plugin string <nil>"},{"location":"reference/config/#pluginsblockchainethereumaddressresolver","title":"plugins.blockchain[].ethereum.addressResolver","text":"Key Description Type Default Value alwaysResolve Causes the address resolver to be invoked on every API call that submits a signing key, regardless of whether the input string conforms to an 0x address. Also disables any result caching boolean <nil> bodyTemplate The body go template string to use when making HTTP requests. The template input contains '.Key' and '.Intent' string variables. Go Template string <nil> connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted time.Duration 30s expectContinueTimeout See ExpectContinueTimeout in the Go docs time.Duration 1s headers Adds custom headers to HTTP requests string <nil> idleTimeout The max duration to hold a HTTP keepalive connection between calls time.Duration 475ms maxConnsPerHost The max number of connections, per unique hostname. Zero means no limit int 0 maxIdleConns The max number of idle connections to hold pooled int 100 maxIdleConnsPerHost The max number of idle connections, per unique hostname. Zero means net/http uses the default of only 2. 
int 100 method The HTTP method to use when making requests to the Address Resolver string GET passthroughHeadersEnabled Enable passing through the set of allowed HTTP request headers boolean false requestTimeout The maximum amount of time that a request is allowed to remain open time.Duration 30s responseField The name of a JSON field that is provided in the response, that contains the ethereum address (default address) string address retainOriginal When true the original pre-resolved string is retained after the lookup, and passed down to Ethconnect as the from address boolean <nil> tlsHandshakeTimeout The maximum amount of time to wait for a successful TLS handshake time.Duration 10s url The URL of the Address Resolver string <nil> urlTemplate The URL Go template string to use when calling the Address Resolver. The template input contains '.Key' and '.Intent' string variables. Go Template string <nil>"},{"location":"reference/config/#pluginsblockchainethereumaddressresolverauth","title":"plugins.blockchain[].ethereum.addressResolver.auth","text":"Key Description Type Default Value password Password string <nil> username Username string <nil>"},{"location":"reference/config/#pluginsblockchainethereumaddressresolverproxy","title":"plugins.blockchain[].ethereum.addressResolver.proxy","text":"Key Description Type Default Value url Optional HTTP proxy server to use when connecting to the Address Resolver URL string <nil>"},{"location":"reference/config/#pluginsblockchainethereumaddressresolverretry","title":"plugins.blockchain[].ethereum.addressResolver.retry","text":"Key Description Type Default Value count The maximum number of times to retry int 5 enabled Enables retries boolean false errorStatusCodeRegex The regex that the error response status code must match to trigger retry string <nil> initWaitTime The initial retry delay time.Duration 250ms maxWaitTime The maximum retry delay time.Duration 
30s"},{"location":"reference/config/#pluginsblockchainethereumaddressresolverthrottle","title":"plugins.blockchain[].ethereum.addressResolver.throttle","text":"Key Description Type Default Value burst The maximum number of requests that can be made in a short period of time before the throttling kicks in. int <nil> requestsPerSecond The average rate at which requests are allowed to pass through over time. int <nil>"},{"location":"reference/config/#pluginsblockchainethereumaddressresolvertls","title":"plugins.blockchain[].ethereum.addressResolver.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify Set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#pluginsblockchainethereumethconnect","title":"plugins.blockchain[].ethereum.ethconnect","text":"Key Description Type Default Value batchSize The number of events Ethconnect should batch together for delivery to FireFly core. 
Only applies when automatically creating a new event stream int 50 batchTimeout How long Ethconnect should wait for new events to arrive and fill a batch, before sending the batch to FireFly core. Only applies when automatically creating a new event stream time.Duration 500 connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted time.Duration 30s expectContinueTimeout See ExpectContinueTimeout in the Go docs time.Duration 1s fromBlock The first event this FireFly instance should listen to from the BatchPin smart contract. Default=0. Only affects initial creation of the event stream Address string 0 headers Adds custom headers to HTTP requests map[string]string <nil> idleTimeout The max duration to hold a HTTP keepalive connection between calls time.Duration 475ms instance The Ethereum address of the FireFly BatchPin smart contract that has been deployed to the blockchain Address string <nil> maxConnsPerHost The max number of connections, per unique hostname. Zero means no limit int 0 maxIdleConns The max number of idle connections to hold pooled int 100 maxIdleConnsPerHost The max number of idle connections, per unique hostname. Zero means net/http uses the default of only 2. 
int 100 passthroughHeadersEnabled Enable passing through the set of allowed HTTP request headers boolean false prefixLong The prefix that will be used for Ethconnect specific HTTP headers when FireFly makes requests to Ethconnect string firefly prefixShort The prefix that will be used for Ethconnect specific query parameters when FireFly makes requests to Ethconnect string fly requestTimeout The maximum amount of time that a request is allowed to remain open time.Duration 30s tlsHandshakeTimeout The maximum amount of time to wait for a successful TLS handshake time.Duration 10s topic The websocket listen topic that the node should register on, which is important if there are multiple nodes using a single ethconnect string <nil> url The URL of the Ethconnect instance URL string <nil>"},{"location":"reference/config/#pluginsblockchainethereumethconnectauth","title":"plugins.blockchain[].ethereum.ethconnect.auth","text":"Key Description Type Default Value password Password string <nil> username Username string <nil>"},{"location":"reference/config/#pluginsblockchainethereumethconnectbackgroundstart","title":"plugins.blockchain[].ethereum.ethconnect.backgroundStart","text":"Key Description Type Default Value enabled Start the Ethconnect plugin in the background and enter retry loop if failed to start boolean <nil> factor Set the factor by which the delay increases when retrying float32 2 initialDelay Delay between restarts in the case where we retry to restart the ethereum plugin time.Duration 5s maxDelay Max delay between restarts in the case where we retry to restart the ethereum plugin time.Duration 1m"},{"location":"reference/config/#pluginsblockchainethereumethconnectproxy","title":"plugins.blockchain[].ethereum.ethconnect.proxy","text":"Key Description Type Default Value url Optional HTTP proxy server to use when connecting to Ethconnect URL string 
<nil>"},{"location":"reference/config/#pluginsblockchainethereumethconnectretry","title":"plugins.blockchain[].ethereum.ethconnect.retry","text":"Key Description Type Default Value count The maximum number of times to retry int 5 enabled Enables retries boolean false errorStatusCodeRegex The regex that the error response status code must match to trigger retry string <nil> initWaitTime The initial retry delay time.Duration 250ms maxWaitTime The maximum retry delay time.Duration 30s"},{"location":"reference/config/#pluginsblockchainethereumethconnectthrottle","title":"plugins.blockchain[].ethereum.ethconnect.throttle","text":"Key Description Type Default Value burst The maximum number of requests that can be made in a short period of time before the throttling kicks in. int <nil> requestsPerSecond The average rate at which requests are allowed to pass through over time. int <nil>"},{"location":"reference/config/#pluginsblockchainethereumethconnecttls","title":"plugins.blockchain[].ethereum.ethconnect.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify Set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. 
Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#pluginsblockchainethereumethconnectws","title":"plugins.blockchain[].ethereum.ethconnect.ws","text":"Key Description Type Default Value connectionTimeout The amount of time to wait while establishing a connection (or auto-reconnection) time.Duration 45s heartbeatInterval The amount of time to wait between heartbeat signals on the WebSocket connection time.Duration 30s initialConnectAttempts The number of attempts FireFly will make to connect to the WebSocket when starting up, before failing int 5 path The WebSocket server URL to which FireFly should connect WebSocket URL string <nil> readBufferSize The size in bytes of the read buffer for the WebSocket connection BytesSize 16Kb url URL to use for WebSocket - overrides url one level up (in the HTTP config) string <nil> writeBufferSize The size in bytes of the write buffer for the WebSocket connection BytesSize 16Kb"},{"location":"reference/config/#pluginsblockchainethereumfftm","title":"plugins.blockchain[].ethereum.fftm","text":"Key Description Type Default Value connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted time.Duration 30s expectContinueTimeout See ExpectContinueTimeout in the Go docs time.Duration 1s headers Adds custom headers to HTTP requests map[string]string <nil> idleTimeout The max duration to hold a HTTP keepalive connection between calls time.Duration 475ms maxConnsPerHost The max number of connections, per unique hostname. Zero means no limit int 0 maxIdleConns The max number of idle connections to hold pooled int 100 maxIdleConnsPerHost The max number of idle connections, per unique hostname. Zero means net/http uses the default of only 2. 
int 100 passthroughHeadersEnabled Enable passing through the set of allowed HTTP request headers boolean false requestTimeout The maximum amount of time that a request is allowed to remain open time.Duration 30s tlsHandshakeTimeout The maximum amount of time to wait for a successful TLS handshake time.Duration 10s url The URL of the FireFly Transaction Manager runtime, if enabled string <nil>"},{"location":"reference/config/#pluginsblockchainethereumfftmauth","title":"plugins.blockchain[].ethereum.fftm.auth","text":"Key Description Type Default Value password Password string <nil> username Username string <nil>"},{"location":"reference/config/#pluginsblockchainethereumfftmproxy","title":"plugins.blockchain[].ethereum.fftm.proxy","text":"Key Description Type Default Value url Optional HTTP proxy server to use when connecting to the Transaction Manager string <nil>"},{"location":"reference/config/#pluginsblockchainethereumfftmretry","title":"plugins.blockchain[].ethereum.fftm.retry","text":"Key Description Type Default Value count The maximum number of times to retry int 5 enabled Enables retries boolean false errorStatusCodeRegex The regex that the error response status code must match to trigger retry string <nil> initWaitTime The initial retry delay time.Duration 250ms maxWaitTime The maximum retry delay time.Duration 30s"},{"location":"reference/config/#pluginsblockchainethereumfftmthrottle","title":"plugins.blockchain[].ethereum.fftm.throttle","text":"Key Description Type Default Value burst The maximum number of requests that can be made in a short period of time before the throttling kicks in. int <nil> requestsPerSecond The average rate at which requests are allowed to pass through over time. 
int <nil>"},{"location":"reference/config/#pluginsblockchainethereumfftmtls","title":"plugins.blockchain[].ethereum.fftm.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify When set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#pluginsblockchainfabricfabconnect","title":"plugins.blockchain[].fabric.fabconnect","text":"Key Description Type Default Value batchSize The number of events Fabconnect should batch together for delivery to FireFly core.
Only applies when automatically creating a new event stream int 50 batchTimeout The maximum amount of time to wait for a batch to complete time.Duration 500 chaincode The name of the Fabric chaincode that FireFly will use for BatchPin transactions (deprecated - use fireflyContract[].chaincode) string <nil> channel The Fabric channel that FireFly will use for BatchPin transactions string <nil> connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted time.Duration 30s expectContinueTimeout See ExpectContinueTimeout in the Go docs time.Duration 1s headers Adds custom headers to HTTP requests map[string]string <nil> idleTimeout The max duration to hold a HTTP keepalive connection between calls time.Duration 475ms maxConnsPerHost The max number of connections, per unique hostname. Zero means no limit int 0 maxIdleConns The max number of idle connections to hold pooled int 100 maxIdleConnsPerHost The max number of idle connections, per unique hostname. Zero means net/http uses the default of only 2. 
int 100 passthroughHeadersEnabled Enable passing through the set of allowed HTTP request headers boolean false prefixLong The prefix that will be used for Fabconnect specific HTTP headers when FireFly makes requests to Fabconnect string firefly prefixShort The prefix that will be used for Fabconnect specific query parameters when FireFly makes requests to Fabconnect string fly requestTimeout The maximum amount of time that a request is allowed to remain open time.Duration 30s signer The Fabric signing key to use when submitting transactions to Fabconnect string <nil> tlsHandshakeTimeout The maximum amount of time to wait for a successful TLS handshake time.Duration 10s topic The websocket listen topic that the node should register on, which is important if there are multiple nodes using a single Fabconnect string <nil> url The URL of the Fabconnect instance URL string <nil>"},{"location":"reference/config/#pluginsblockchainfabricfabconnectauth","title":"plugins.blockchain[].fabric.fabconnect.auth","text":"Key Description Type Default Value password Password string <nil> username Username string <nil>"},{"location":"reference/config/#pluginsblockchainfabricfabconnectbackgroundstart","title":"plugins.blockchain[].fabric.fabconnect.backgroundStart","text":"Key Description Type Default Value enabled Start the fabric plugin in the background and enter retry loop if failed to start boolean <nil> factor Set the factor by which the delay increases when retrying float32 2 initialDelay Delay between restarts in the case where we retry to restart the fabric plugin time.Duration 5s maxDelay Max delay between restarts in the case where we retry to restart the fabric plugin time.Duration 1m"},{"location":"reference/config/#pluginsblockchainfabricfabconnectproxy","title":"plugins.blockchain[].fabric.fabconnect.proxy","text":"Key Description Type Default Value url Optional HTTP proxy server to use when connecting to Fabconnect URL string 
<nil>"},{"location":"reference/config/#pluginsblockchainfabricfabconnectretry","title":"plugins.blockchain[].fabric.fabconnect.retry","text":"Key Description Type Default Value count The maximum number of times to retry int 5 enabled Enables retries boolean false errorStatusCodeRegex The regex that the error response status code must match to trigger retry string <nil> initWaitTime The initial retry delay time.Duration 250ms maxWaitTime The maximum retry delay time.Duration 30s"},{"location":"reference/config/#pluginsblockchainfabricfabconnectthrottle","title":"plugins.blockchain[].fabric.fabconnect.throttle","text":"Key Description Type Default Value burst The maximum number of requests that can be made in a short period of time before the throttling kicks in. int <nil> requestsPerSecond The average rate at which requests are allowed to pass through over time. int <nil>"},{"location":"reference/config/#pluginsblockchainfabricfabconnecttls","title":"plugins.blockchain[].fabric.fabconnect.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify When set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes.
Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#pluginsblockchainfabricfabconnectws","title":"plugins.blockchain[].fabric.fabconnect.ws","text":"Key Description Type Default Value connectionTimeout The amount of time to wait while establishing a connection (or auto-reconnection) time.Duration 45s heartbeatInterval The amount of time to wait between heartbeat signals on the WebSocket connection time.Duration 30s initialConnectAttempts The number of attempts FireFly will make to connect to the WebSocket when starting up, before failing int 5 path The WebSocket server URL to which FireFly should connect WebSocket URL string <nil> readBufferSize The size in bytes of the read buffer for the WebSocket connection BytesSize 16Kb url URL to use for WebSocket - overrides url one level up (in the HTTP config) string <nil> writeBufferSize The size in bytes of the write buffer for the WebSocket connection BytesSize 16Kb"},{"location":"reference/config/#pluginsblockchaintezosaddressresolver","title":"plugins.blockchain[].tezos.addressResolver","text":"Key Description Type Default Value alwaysResolve Causes the address resolver to be invoked on every API call that submits a signing key. Also disables any result caching boolean <nil> bodyTemplate The body go template string to use when making HTTP requests Go Template string <nil> connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted time.Duration 30s expectContinueTimeout See ExpectContinueTimeout in the Go docs time.Duration 1s headers Adds custom headers to HTTP requests map[string]string <nil> idleTimeout The max duration to hold a HTTP keepalive connection between calls time.Duration 475ms maxConnsPerHost The max number of connections, per unique hostname.
Zero means no limit int 0 maxIdleConns The max number of idle connections to hold pooled int 100 maxIdleConnsPerHost The max number of idle connections, per unique hostname. Zero means net/http uses the default of only 2. int 100 method The HTTP method to use when making requests to the Address Resolver string GET passthroughHeadersEnabled Enable passing through the set of allowed HTTP request headers boolean false requestTimeout The maximum amount of time that a request is allowed to remain open time.Duration 30s responseField The name of a JSON field that is provided in the response, that contains the tezos address (default address) string address retainOriginal When true the original pre-resolved string is retained after the lookup, and passed down to Tezosconnect as the from address boolean <nil> tlsHandshakeTimeout The maximum amount of time to wait for a successful TLS handshake time.Duration 10s url The URL of the Address Resolver string <nil> urlTemplate The URL Go template string to use when calling the Address Resolver. The template input contains '.Key' and '.Intent' string variables. 
Go Template string <nil>"},{"location":"reference/config/#pluginsblockchaintezosaddressresolverauth","title":"plugins.blockchain[].tezos.addressResolver.auth","text":"Key Description Type Default Value password Password string <nil> username Username string <nil>"},{"location":"reference/config/#pluginsblockchaintezosaddressresolverproxy","title":"plugins.blockchain[].tezos.addressResolver.proxy","text":"Key Description Type Default Value url Optional HTTP proxy server to connect through string <nil>"},{"location":"reference/config/#pluginsblockchaintezosaddressresolverretry","title":"plugins.blockchain[].tezos.addressResolver.retry","text":"Key Description Type Default Value count The maximum number of times to retry int 5 enabled Enables retries boolean false errorStatusCodeRegex The regex that the error response status code must match to trigger retry string <nil> initWaitTime The initial retry delay time.Duration 250ms maxWaitTime The maximum retry delay time.Duration 30s"},{"location":"reference/config/#pluginsblockchaintezosaddressresolverthrottle","title":"plugins.blockchain[].tezos.addressResolver.throttle","text":"Key Description Type Default Value burst The maximum number of requests that can be made in a short period of time before the throttling kicks in. int <nil> requestsPerSecond The average rate at which requests are allowed to pass through over time. 
int <nil>"},{"location":"reference/config/#pluginsblockchaintezosaddressresolvertls","title":"plugins.blockchain[].tezos.addressResolver.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify When set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#pluginsblockchaintezostezosconnect","title":"plugins.blockchain[].tezos.tezosconnect","text":"Key Description Type Default Value batchSize The number of events Tezosconnect should batch together for delivery to FireFly core. Only applies when automatically creating a new event stream int 50 batchTimeout How long Tezosconnect should wait for new events to arrive and fill a batch, before sending the batch to FireFly core.
Only applies when automatically creating a new event stream time.Duration 500 connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted time.Duration 30s expectContinueTimeout See ExpectContinueTimeout in the Go docs time.Duration 1s headers Adds custom headers to HTTP requests map[string]string <nil> idleTimeout The max duration to hold a HTTP keepalive connection between calls time.Duration 475ms maxConnsPerHost The max number of connections, per unique hostname. Zero means no limit int 0 maxIdleConns The max number of idle connections to hold pooled int 100 maxIdleConnsPerHost The max number of idle connections, per unique hostname. Zero means net/http uses the default of only 2. int 100 passthroughHeadersEnabled Enable passing through the set of allowed HTTP request headers boolean false prefixLong The prefix that will be used for Tezosconnect specific HTTP headers when FireFly makes requests to Tezosconnect string firefly prefixShort The prefix that will be used for Tezosconnect specific query parameters when FireFly makes requests to Tezosconnect string fly requestTimeout The maximum amount of time that a request is allowed to remain open time.Duration 30s tlsHandshakeTimeout The maximum amount of time to wait for a successful TLS handshake time.Duration 10s topic The websocket listen topic that the node should register on, which is important if there are multiple nodes using a single tezosconnect string <nil> url The URL of the Tezosconnect instance URL string <nil>"},{"location":"reference/config/#pluginsblockchaintezostezosconnectauth","title":"plugins.blockchain[].tezos.tezosconnect.auth","text":"Key Description Type Default Value password Password string <nil> username Username string <nil>"},{"location":"reference/config/#pluginsblockchaintezostezosconnectbackgroundstart","title":"plugins.blockchain[].tezos.tezosconnect.backgroundStart","text":"Key Description Type Default Value enabled Start the 
Tezosconnect plugin in the background and enter retry loop if failed to start boolean <nil> factor Set the factor by which the delay increases when retrying float32 2 initialDelay Delay between restarts in the case where we retry to restart the tezos plugin time.Duration 5s maxDelay Max delay between restarts in the case where we retry to restart the tezos plugin time.Duration 1m"},{"location":"reference/config/#pluginsblockchaintezostezosconnectproxy","title":"plugins.blockchain[].tezos.tezosconnect.proxy","text":"Key Description Type Default Value url Optional HTTP proxy server to use when connecting to Tezosconnect URL string <nil>"},{"location":"reference/config/#pluginsblockchaintezostezosconnectretry","title":"plugins.blockchain[].tezos.tezosconnect.retry","text":"Key Description Type Default Value count The maximum number of times to retry int 5 enabled Enables retries boolean false errorStatusCodeRegex The regex that the error response status code must match to trigger retry string <nil> initWaitTime The initial retry delay time.Duration 250ms maxWaitTime The maximum retry delay time.Duration 30s"},{"location":"reference/config/#pluginsblockchaintezostezosconnectthrottle","title":"plugins.blockchain[].tezos.tezosconnect.throttle","text":"Key Description Type Default Value burst The maximum number of requests that can be made in a short period of time before the throttling kicks in. int <nil> requestsPerSecond The average rate at which requests are allowed to pass through over time. 
int <nil>"},{"location":"reference/config/#pluginsblockchaintezostezosconnecttls","title":"plugins.blockchain[].tezos.tezosconnect.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify When set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes.
Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#pluginsblockchaintezostezosconnectws","title":"plugins.blockchain[].tezos.tezosconnect.ws","text":"Key Description Type Default Value connectionTimeout The amount of time to wait while establishing a connection (or auto-reconnection) time.Duration 45s heartbeatInterval The amount of time to wait between heartbeat signals on the WebSocket connection time.Duration 30s initialConnectAttempts The number of attempts FireFly will make to connect to the WebSocket when starting up, before failing int 5 path The WebSocket server URL to which FireFly should connect WebSocket URL string <nil> readBufferSize The size in bytes of the read buffer for the WebSocket connection BytesSize 16Kb url URL to use for WebSocket - overrides url one level up (in the HTTP config) string <nil> writeBufferSize The size in bytes of the write buffer for the WebSocket connection BytesSize 16Kb"},{"location":"reference/config/#pluginsdatabase","title":"plugins.database[]","text":"Key Description Type Default Value name The name of the Database plugin string <nil> type The type of the configured Database plugin string <nil>"},{"location":"reference/config/#pluginsdatabasepostgres","title":"plugins.database[].postgres","text":"Key Description Type Default Value maxConnIdleTime The maximum amount of time a database connection can be idle time.Duration 1m maxConnLifetime The maximum amount of time to keep a database connection open time.Duration <nil> maxConns Maximum connections to the database int 50 maxIdleConns The maximum number of idle connections to the database int <nil> url The PostgreSQL connection string for the database string
<nil>"},{"location":"reference/config/#pluginsdatabasepostgresmigrations","title":"plugins.database[].postgres.migrations","text":"Key Description Type Default Value auto Enables automatic database migrations boolean false directory The directory containing the numerically ordered migration DDL files to apply to the database string ./db/migrations/postgres"},{"location":"reference/config/#pluginsdatabasesqlite3","title":"plugins.database[].sqlite3","text":"Key Description Type Default Value maxConnIdleTime The maximum amount of time a database connection can be idle time.Duration 1m maxConnLifetime The maximum amount of time to keep a database connection open time.Duration <nil> maxConns Maximum connections to the database int 1 maxIdleConns The maximum number of idle connections to the database int <nil> url The SQLite connection string for the database string <nil>"},{"location":"reference/config/#pluginsdatabasesqlite3migrations","title":"plugins.database[].sqlite3.migrations","text":"Key Description Type Default Value auto Enables automatic database migrations boolean false directory The directory containing the numerically ordered migration DDL files to apply to the database string ./db/migrations/sqlite"},{"location":"reference/config/#pluginsdataexchange","title":"plugins.dataexchange[]","text":"Key Description Type Default Value name The name of the configured Data Exchange plugin string <nil> type The Data Exchange plugin to use string <nil>"},{"location":"reference/config/#pluginsdataexchangeffdx","title":"plugins.dataexchange[].ffdx","text":"Key Description Type Default Value connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted time.Duration 30s expectContinueTimeout See ExpectContinueTimeout in the Go docs time.Duration 1s headers Adds custom headers to HTTP requests map[string]string <nil> idleTimeout The max duration to hold a HTTP keepalive connection between calls time.Duration 475ms 
initEnabled Instructs FireFly to always post all current nodes to the /init API before connecting or reconnecting to the connector boolean false manifestEnabled Determines whether to require+validate a manifest from other DX instances in the network. Must be supported by the connector string false maxConnsPerHost The max number of connections, per unique hostname. Zero means no limit int 0 maxIdleConns The max number of idle connections to hold pooled int 100 maxIdleConnsPerHost The max number of idle connections, per unique hostname. Zero means net/http uses the default of only 2. int 100 passthroughHeadersEnabled Enable passing through the set of allowed HTTP request headers boolean false requestTimeout The maximum amount of time that a request is allowed to remain open time.Duration 30s tlsHandshakeTimeout The maximum amount of time to wait for a successful TLS handshake time.Duration 10s url The URL of the Data Exchange instance URL string <nil>"},{"location":"reference/config/#pluginsdataexchangeffdxauth","title":"plugins.dataexchange[].ffdx.auth","text":"Key Description Type Default Value password Password string <nil> username Username string <nil>"},{"location":"reference/config/#pluginsdataexchangeffdxbackgroundstart","title":"plugins.dataexchange[].ffdx.backgroundStart","text":"Key Description Type Default Value enabled Start the data exchange plugin in the background and enter retry loop if failed to start boolean false factor Set the factor by which the delay increases when retrying float32 2 initialDelay Delay between restarts in the case where we retry to restart the data exchange plugin time.Duration 5s maxDelay Max delay between restarts in the case where we retry to restart the data exchange plugin time.Duration 1m"},{"location":"reference/config/#pluginsdataexchangeffdxeventretry","title":"plugins.dataexchange[].ffdx.eventRetry","text":"Key Description Type Default Value factor The retry backoff factor, for event processing float32 2 initialDelay 
The initial retry delay, for event processing time.Duration 50ms maxDelay The maximum retry delay, for event processing time.Duration 30s"},{"location":"reference/config/#pluginsdataexchangeffdxproxy","title":"plugins.dataexchange[].ffdx.proxy","text":"Key Description Type Default Value url Optional HTTP proxy server to use when connecting to the Data Exchange URL string <nil>"},{"location":"reference/config/#pluginsdataexchangeffdxretry","title":"plugins.dataexchange[].ffdx.retry","text":"Key Description Type Default Value count The maximum number of times to retry int 5 enabled Enables retries boolean false errorStatusCodeRegex The regex that the error response status code must match to trigger retry string <nil> initWaitTime The initial retry delay time.Duration 250ms maxWaitTime The maximum retry delay time.Duration 30s"},{"location":"reference/config/#pluginsdataexchangeffdxthrottle","title":"plugins.dataexchange[].ffdx.throttle","text":"Key Description Type Default Value burst The maximum number of requests that can be made in a short period of time before the throttling kicks in. int <nil> requestsPerSecond The average rate at which requests are allowed to pass through over time. int <nil>"},{"location":"reference/config/#pluginsdataexchangeffdxtls","title":"plugins.dataexchange[].ffdx.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify When set to true in unit test development environments to disable TLS verification.
Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#pluginsdataexchangeffdxws","title":"plugins.dataexchange[].ffdx.ws","text":"Key Description Type Default Value connectionTimeout The amount of time to wait while establishing a connection (or auto-reconnection) time.Duration 45s heartbeatInterval The amount of time to wait between heartbeat signals on the WebSocket connection time.Duration 30s initialConnectAttempts The number of attempts FireFly will make to connect to the WebSocket when starting up, before failing int 5 path The WebSocket server URL to which FireFly should connect WebSocket URL string <nil> readBufferSize The size in bytes of the read buffer for the WebSocket connection BytesSize 16Kb url URL to use for WebSocket - overrides url one level up (in the HTTP config) string <nil> writeBufferSize The size in bytes of the write buffer for the WebSocket connection BytesSize 16Kb"},{"location":"reference/config/#pluginsidentity","title":"plugins.identity[]","text":"Key Description Type Default Value name The name of a configured Identity plugin string <nil> type The type of a configured Identity plugin string <nil>"},{"location":"reference/config/#pluginssharedstorage","title":"plugins.sharedstorage[]","text":"Key Description Type Default Value name The name of the Shared Storage plugin to use string <nil> type The Shared Storage plugin to use string <nil>"},{"location":"reference/config/#pluginssharedstorageipfsapi","title":"plugins.sharedstorage[].ipfs.api","text":"Key Description Type
Default Value connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted time.Duration 30s expectContinueTimeout See ExpectContinueTimeout in the Go docs time.Duration 1s headers Adds custom headers to HTTP requests map[string]string <nil> idleTimeout The max duration to hold a HTTP keepalive connection between calls time.Duration 475ms maxConnsPerHost The max number of connections, per unique hostname. Zero means no limit int 0 maxIdleConns The max number of idle connections to hold pooled int 100 maxIdleConnsPerHost The max number of idle connections, per unique hostname. Zero means net/http uses the default of only 2. int 100 passthroughHeadersEnabled Enable passing through the set of allowed HTTP request headers boolean false requestTimeout The maximum amount of time that a request is allowed to remain open time.Duration 30s tlsHandshakeTimeout The maximum amount of time to wait for a successful TLS handshake time.Duration 10s url The URL for the IPFS API URL string <nil>"},{"location":"reference/config/#pluginssharedstorageipfsapiauth","title":"plugins.sharedstorage[].ipfs.api.auth","text":"Key Description Type Default Value password Password string <nil> username Username string <nil>"},{"location":"reference/config/#pluginssharedstorageipfsapiproxy","title":"plugins.sharedstorage[].ipfs.api.proxy","text":"Key Description Type Default Value url Optional HTTP proxy server to use when connecting to the IPFS API URL string <nil>"},{"location":"reference/config/#pluginssharedstorageipfsapiretry","title":"plugins.sharedstorage[].ipfs.api.retry","text":"Key Description Type Default Value count The maximum number of times to retry int 5 enabled Enables retries boolean false errorStatusCodeRegex The regex that the error response status code must match to trigger retry string <nil> initWaitTime The initial retry delay time.Duration 250ms maxWaitTime The maximum retry delay time.Duration 
30s"},{"location":"reference/config/#pluginssharedstorageipfsapithrottle","title":"plugins.sharedstorage[].ipfs.api.throttle","text":"Key Description Type Default Value burst The maximum number of requests that can be made in a short period of time before the throttling kicks in. int <nil> requestsPerSecond The average rate at which requests are allowed to pass through over time. int <nil>"},{"location":"reference/config/#pluginssharedstorageipfsapitls","title":"plugins.sharedstorage[].ipfs.api.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify When set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes.
Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#pluginssharedstorageipfsgateway","title":"plugins.sharedstorage[].ipfs.gateway","text":"Key Description Type Default Value connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted time.Duration 30s expectContinueTimeout See ExpectContinueTimeout in the Go docs time.Duration 1s headers Adds custom headers to HTTP requests map[string]string <nil> idleTimeout The max duration to hold a HTTP keepalive connection between calls time.Duration 475ms maxConnsPerHost The max number of connections, per unique hostname. Zero means no limit int 0 maxIdleConns The max number of idle connections to hold pooled int 100 maxIdleConnsPerHost The max number of idle connections, per unique hostname. Zero means net/http uses the default of only 2. 
int 100 passthroughHeadersEnabled Enable passing through the set of allowed HTTP request headers boolean false requestTimeout The maximum amount of time that a request is allowed to remain open time.Duration 30s tlsHandshakeTimeout The maximum amount of time to wait for a successful TLS handshake time.Duration 10s url The URL for the IPFS Gateway URL string <nil>"},{"location":"reference/config/#pluginssharedstorageipfsgatewayauth","title":"plugins.sharedstorage[].ipfs.gateway.auth","text":"Key Description Type Default Value password Password string <nil> username Username string <nil>"},{"location":"reference/config/#pluginssharedstorageipfsgatewayproxy","title":"plugins.sharedstorage[].ipfs.gateway.proxy","text":"Key Description Type Default Value url Optional HTTP proxy server to use when connecting to the IPFS Gateway URL string <nil>"},{"location":"reference/config/#pluginssharedstorageipfsgatewayretry","title":"plugins.sharedstorage[].ipfs.gateway.retry","text":"Key Description Type Default Value count The maximum number of times to retry int 5 enabled Enables retries boolean false errorStatusCodeRegex The regex that the error response status code must match to trigger retry string <nil> initWaitTime The initial retry delay time.Duration 250ms maxWaitTime The maximum retry delay time.Duration 30s"},{"location":"reference/config/#pluginssharedstorageipfsgatewaythrottle","title":"plugins.sharedstorage[].ipfs.gateway.throttle","text":"Key Description Type Default Value burst The maximum number of requests that can be made in a short period of time before the throttling kicks in. int <nil> requestsPerSecond The average rate at which requests are allowed to pass through over time. 
int <nil>"},{"location":"reference/config/#pluginssharedstorageipfsgatewaytls","title":"plugins.sharedstorage[].ipfs.gateway.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify When set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. 
Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#pluginstokens","title":"plugins.tokens[]","text":"Key Description Type Default Value broadcastName The name to be used in broadcast messages related to this token plugin, if it differs from the local plugin name string <nil> name A name to identify this token plugin string <nil> type The type of the token plugin to use string <nil>"},{"location":"reference/config/#pluginstokensfftokens","title":"plugins.tokens[].fftokens","text":"Key Description Type Default Value connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted time.Duration 30s expectContinueTimeout See ExpectContinueTimeout in the Go docs time.Duration 1s headers Adds custom headers to HTTP requests map[string]string <nil> idleTimeout The max duration to hold a HTTP keepalive connection between calls time.Duration 475ms maxConnsPerHost The max number of connections, per unique hostname. Zero means no limit int 0 maxIdleConns The max number of idle connections to hold pooled int 100 maxIdleConnsPerHost The max number of idle connections, per unique hostname. Zero means net/http uses the default of only 2. 
int 100 passthroughHeadersEnabled Enable passing through the set of allowed HTTP request headers boolean false requestTimeout The maximum amount of time that a request is allowed to remain open time.Duration 30s tlsHandshakeTimeout The maximum amount of time to wait for a successful TLS handshake time.Duration 10s url The URL of the token connector URL string <nil>"},{"location":"reference/config/#pluginstokensfftokensauth","title":"plugins.tokens[].fftokens.auth","text":"Key Description Type Default Value password Password string <nil> username Username string <nil>"},{"location":"reference/config/#pluginstokensfftokensbackgroundstart","title":"plugins.tokens[].fftokens.backgroundStart","text":"Key Description Type Default Value enabled Start the tokens plugin in the background and enter retry loop if failed to start boolean false factor Set the factor by which the delay increases when retrying float32 2 initialDelay Delay between restarts in the case where we retry to restart the token plugin time.Duration 5s maxDelay Max delay between restarts in the case where we retry to restart the token plugin time.Duration 1m"},{"location":"reference/config/#pluginstokensfftokenseventretry","title":"plugins.tokens[].fftokens.eventRetry","text":"Key Description Type Default Value factor The retry backoff factor, for event processing float32 2 initialDelay The initial retry delay, for event processing time.Duration 50ms maxDelay The maximum retry delay, for event processing time.Duration 30s"},{"location":"reference/config/#pluginstokensfftokensproxy","title":"plugins.tokens[].fftokens.proxy","text":"Key Description Type Default Value url Optional HTTP proxy server to use when connecting to the token connector URL string <nil>"},{"location":"reference/config/#pluginstokensfftokensretry","title":"plugins.tokens[].fftokens.retry","text":"Key Description Type Default Value count The maximum number of times to retry int 5 enabled Enables retries boolean false errorStatusCodeRegex 
The regex that the error response status code must match to trigger retry string <nil> initWaitTime The initial retry delay time.Duration 250ms maxWaitTime The maximum retry delay time.Duration 30s"},{"location":"reference/config/#pluginstokensfftokensthrottle","title":"plugins.tokens[].fftokens.throttle","text":"Key Description Type Default Value burst The maximum number of requests that can be made in a short period of time before the throttling kicks in. int <nil> requestsPerSecond The average rate at which requests are allowed to pass through over time. int <nil>"},{"location":"reference/config/#pluginstokensfftokenstls","title":"plugins.tokens[].fftokens.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify When set to true in unit test development environments to disable TLS verification. Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. 
Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#pluginstokensfftokensws","title":"plugins.tokens[].fftokens.ws","text":"Key Description Type Default Value connectionTimeout The amount of time to wait while establishing a connection (or auto-reconnection) time.Duration 45s heartbeatInterval The amount of time to wait between heartbeat signals on the WebSocket connection time.Duration 30s initialConnectAttempts The number of attempts FireFly will make to connect to the WebSocket when starting up, before failing int 5 path The WebSocket server URL to which FireFly should connect WebSocket URL string <nil> readBufferSize The size in bytes of the read buffer for the WebSocket connection BytesSize 16Kb url URL to use for WebSocket - overrides url one level up (in the HTTP config) string <nil> writeBufferSize The size in bytes of the write buffer for the WebSocket connection BytesSize 16Kb"},{"location":"reference/config/#privatemessagingbatch","title":"privatemessaging.batch","text":"Key Description Type Default Value agentTimeout How long to keep around a batching agent for a sending identity before disposal time.Duration 2m payloadLimit The maximum payload size of a private message Data Exchange payload BytesSize 800Kb size The maximum number of messages in a batch for private messages int 200 timeout The timeout to wait for a batch to fill, before sending time.Duration 1s"},{"location":"reference/config/#privatemessagingretry","title":"privatemessaging.retry","text":"Key Description Type Default Value factor The retry backoff factor float32 2 initDelay The initial retry delay time.Duration 100ms maxDelay The maximum retry delay time.Duration 30s"},{"location":"reference/config/#spi","title":"spi","text":"Key Description Type Default Value address The IP address on which 
the admin HTTP API should listen IP Address string 127.0.0.1 enabled Enables the admin HTTP API boolean false port The port on which the admin HTTP API should listen int 5001 publicURL The fully qualified public URL for the admin API. This is used for building URLs in HTTP responses and in OpenAPI Spec generation URL string <nil> readTimeout The maximum time to wait when reading from an HTTP connection time.Duration 15s shutdownTimeout The maximum amount of time to wait for any open HTTP requests to finish before shutting down the HTTP server time.Duration 10s writeTimeout The maximum time to wait when writing to an HTTP connection time.Duration 15s"},{"location":"reference/config/#spiauth","title":"spi.auth","text":"Key Description Type Default Value type The auth plugin to use for server side authentication of requests string <nil>"},{"location":"reference/config/#spiauthbasic","title":"spi.auth.basic","text":"Key Description Type Default Value passwordfile The path to a .htpasswd file to use for authenticating requests. Passwords should be hashed with bcrypt. string <nil>"},{"location":"reference/config/#spitls","title":"spi.tls","text":"Key Description Type Default Value ca The TLS certificate authority in PEM format (this option is ignored if caFile is also set) string <nil> caFile The path to the CA file for TLS on this API string <nil> cert The TLS certificate in PEM format (this option is ignored if certFile is also set) string <nil> certFile The path to the certificate file for TLS on this API string <nil> clientAuth Enables or disables client auth for TLS on this API string <nil> enabled Enables or disables TLS on this API boolean false insecureSkipHostVerify When set to true in unit test development environments to disable TLS verification. 
Use with extreme caution boolean <nil> key The TLS certificate key in PEM format (this option is ignored if keyFile is also set) string <nil> keyFile The path to the private key file for TLS on this API string <nil> requiredDNAttributes A set of required subject DN attributes. Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes) map[string]string <nil>"},{"location":"reference/config/#spiws","title":"spi.ws","text":"Key Description Type Default Value blockedWarnInterval How often to log warnings in core, when an admin change event listener falls behind the stream they requested and misses events time.Duration 1m eventQueueLength Server-side queue length for events waiting for delivery over an admin change event listener websocket int 250 readBufferSize The size in bytes of the read buffer for the WebSocket connection BytesSize 16Kb writeBufferSize The size in bytes of the write buffer for the WebSocket connection BytesSize 16Kb"},{"location":"reference/config/#subscription","title":"subscription","text":"Key Description Type Default Value max The maximum number of pre-defined subscriptions that can exist (note for high fan-out consider connecting a dedicated pub/sub broker to the dispatcher) int 500"},{"location":"reference/config/#subscriptiondefaults","title":"subscription.defaults","text":"Key Description Type Default Value batchSize Default read ahead to enable for subscriptions that do not explicitly configure readahead int 50 batchTimeout Default batch timeout int 50ms"},{"location":"reference/config/#subscriptionevents","title":"subscription.events","text":"Key Description Type Default Value maxScanLength The maximum number of events a search for historical events matching a subscription will index from the database int 1000"},{"location":"reference/config/#subscriptionretry","title":"subscription.retry","text":"Key 
Description Type Default Value factor The retry backoff factor float32 2 initDelay The initial retry delay time.Duration 250ms maxDelay The maximum retry delay time.Duration 30s"},{"location":"reference/config/#transactionwriter","title":"transaction.writer","text":"Key Description Type Default Value batchMaxTransactions The maximum number of transaction inserts to include in a batch int 100 batchTimeout How long to wait for more transactions to arrive before flushing the batch time.Duration 10ms count The number of message writer workers int 5"},{"location":"reference/config/#ui","title":"ui","text":"Key Description Type Default Value enabled Enables the web user interface boolean true path The file system path which contains the static HTML, CSS, and JavaScript files for the user interface string <nil>"},{"location":"reference/events/","title":"Event Bus","text":""},{"location":"reference/events/#hyperledger-firefly-event-bus","title":"Hyperledger FireFly Event Bus","text":"The FireFly event bus provides your application with a single stream of events from all of the back-end services that plug into FireFly.
Applications subscribe to these events using developer-friendly protocols like WebSockets and Webhooks. Additional transports and messaging systems like NATS, Kafka, and JMS servers can be connected through plugins.
Each application creates one or more Subscriptions to identify itself. In this subscription the application can choose to receive all events that are emitted within a namespace, or can use server-side filtering to only receive a sub-set of events.
The event bus reliably keeps track of which events have been delivered to which applications, via an offset into the main event stream that is updated each time an application acknowledges receipt of events over its subscription.
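The offset mechanism described above can be sketched in a few lines. This is an illustrative model (not FireFly's actual implementation): the stored offset for a subscription only advances when the application acknowledges delivery, so unacknowledged events are redelivered.

```python
# Minimal sketch of an ack-based subscription offset, assuming an in-memory
# event stream. Class and method names here are hypothetical.
class Subscription:
    def __init__(self, name):
        self.name = name
        self.offset = 0  # index into the event stream already confirmed

    def next_batch(self, stream, limit=2):
        # Deliver events after the confirmed offset, without advancing it yet
        return stream[self.offset:self.offset + limit]

    def ack(self, count):
        # Only an explicit ack moves the offset forward, so an app that
        # crashes before acking sees the same events again on reconnect
        self.offset += count

events = ["e1", "e2", "e3"]
sub = Subscription("app1")
first = sub.next_batch(events)        # ["e1", "e2"] delivered
redelivered = sub.next_batch(events)  # same batch again: nothing acked yet
sub.ack(len(first))
after_ack = sub.next_batch(events)    # ["e3"]
```

The same pattern holds for a real WebSocket subscription: until the application sends an ack, reconnecting yields the same events from the last confirmed offset.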
Decentralized applications are built around a source of truth that is shared between multiple parties. No one party can change the state unilaterally, as their changes need to be processed in order with the other changes in the system. Each party processes requests to change shared state in the same order, against a common set of rules for what is allowed at that exact point in the processing. As a result everybody deterministically ends up with the same state at the end of the processing.
This requires an event-driven programming model.
You will find an event-driven model at the core of every blockchain Smart Contract technology.
This event-driven approach is unavoidable regardless of how much of your business data & logic can be directly stored/processed on-chain, vs. off-chain.
So Hyperledger FireFly aims to provide you with the tools to easily manage this model throughout your decentralized application stack.
Your back-end application should be structured for this event-driven paradigm, with an Event Handler constantly listening for events, applying a consistent State Machine to those events and applying the changes to your Application Database.
FireFly comes with a built in event processor for Token transfers & approvals, that implements this pattern to maintain balances, and transaction history in a rich query off-chain data cache.
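The event-handler pattern above can be illustrated with a tiny deterministic state machine that consumes ordered events and maintains an application-side balance table. The event shape and field names here are hypothetical, chosen only to show the pattern.

```python
# Illustrative sketch: one handler applies ordered events to application state.
# Because the logic is deterministic, every party replaying the same events in
# the same order ends up with the same balances.
def apply_event(balances, event):
    if event["type"] == "token_transfer_confirmed":
        t = event["transfer"]
        if t.get("from"):  # a mint has no "from" address in this sketch
            balances[t["from"]] = balances.get(t["from"], 0) - t["amount"]
        balances[t["to"]] = balances.get(t["to"], 0) + t["amount"]
    # other event types would be handled here
    return balances

balances = {}
ordered_events = [
    {"type": "token_transfer_confirmed",
     "transfer": {"from": None, "to": "org1", "amount": 100}},  # mint
    {"type": "token_transfer_confirmed",
     "transfer": {"from": "org1", "to": "org2", "amount": 30}},
]
for ev in ordered_events:
    apply_event(balances, ev)
# balances is now {"org1": 70, "org2": 30}
```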
"},{"location":"reference/events/#decentralized-event-processing","title":"Decentralized Event Processing","text":"In a decentralized system, you need to consider that each organization runs its own applications, and has its own private database.
At any given point in time different organizations will have slightly different views of what the most up to date information is - even for the blockchain state.
As well as the agreed business logic, there will be private data and core system integration that are needed to process events as they happen. Some of this data might be received privately from other parties, over a secure communications channel (not the blockchain).
The system must be eventually consistent across all parties for any business data/decision that those parties need to agree on. This happens by all parties processing the same events in the same order, and by applying the same business logic (for the parts of the business logic that are agreed).
This means that when processing an event, a participant must have access to enough historical data/state to reach the same conclusion as everyone else.
Let's look at a couple of examples.
"},{"location":"reference/events/#example-1-a-fungible-token-balance-transfer","title":"Example 1: A fungible token balance transfer","text":"You need to be able to verify the complete lineage of the tokens being spent, in order to know that they cannot be double spent anywhere in the network.
This means the transaction must be backed by a blockchain verifiable by all participants on the network that could hold balances of that token.
You might be able to use advanced cryptography (such as zero-knowledge proofs) to mask the participants in the trade, but the transactions themselves must be verifiable to everyone in a global sequence that prevents double spending.
"},{"location":"reference/events/#example-2-a-step-in-a-multi-party-business-process","title":"Example 2: A step in a multi-party business process","text":"Here it is likely you want to restrict visibility of the data to just the parties directly involved in the business process.
To come to a common agreement on outcome, the parties must know they are processing the same data in the same order. So at minimum a proof (a hash of the data) needs to be \"pinned\" to a blockchain ledger visible to all participants involved in the process.
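The pinning idea can be sketched in a few lines: only a hash of the private data goes on-chain, and each party independently verifies the data they received off-chain against it. Canonical serialization matters, so that every party hashes identical bytes.

```python
import hashlib
import json

# Sketch of computing an on-chain "pin" for off-chain private data.
def pin_hash(payload: dict) -> str:
    # Canonical JSON (sorted keys, no whitespace) so all parties agree
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

doc = {"bid": 42, "party": "org1"}
onchain = pin_hash(doc)
# Key order in the source dict does not change the pin
same = pin_hash({"party": "org1", "bid": 42})
```

FireFly's real pinning is more sophisticated (it batches messages and pins a hash per batch), but the verification principle is the same: recompute the hash locally and compare it to the on-chain value.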
You can then choose to put more processing on the blockchain, to enforce some critical rules in the business state machine that must be executed fairly to prevent one party from cheating the system. Such as that the highest bid is chosen in a competitive bidding process, or a minimum set of parties have voted agreement before a transaction is finalized.
Other steps in the process might include human decision making, private data from the core systems of one member, or proprietary business logic that one member is not willing to share. These steps are \"non-deterministic\" - you cannot predict the outcome, nor be guaranteed to reproduce the same outcome with the same inputs in the future.
The FireFly event bus is designed to make triggering these non-deterministic steps easy, while still allowing them to be part of the overall state machine of the business process. You need to take care that the system is designed so parties cannot cheat, and must follow the rules. How much of that rule enforcement needs to be executed on-chain vs. off-chain (backed by a deterministic order through the blockchain) is different for each use case.
Remember that tokens provide a great set of building blocks for on-chain steps in your decentralized applications. Enterprise NFTs allow generation of a globally unique ID, and track ownership. Fungible tokens allow value transfer, and can be extended with smart contracts that lock/unlock funds in \"digital escrow\" while complex off-chain agreement happens.
"},{"location":"reference/events/#privacy-groups-and-late-join","title":"Privacy groups and late join","text":"If a new participant needs to join into a business transaction that has already started, they must first \"catch up\" with the current state before they can play their part. In a real-world scenario they might not be allowed to see all the data that's visible to the other parties, so it is common to create a new stream of communications that includes all of the existing parties, plus the new party, to continue the process.
If you use the same blockchain to back both groups, then you can safely order business process steps that involve different parties across these overlapping groups of participants.
Using a single Ethereum permissioned side-chain for example.
Alternatively, you can create dedicated distributed ledgers (DLTs) for communication between these groups of participants. This can allow more logic and data to go on-chain directly, although you still must consider the fact that this data is immutable and can never be deleted.
Using Hyperledger Fabric channels for example.
On top of either type of ledger, FireFly provides a private Group construct to facilitate secure off-chain data exchanges, and to efficiently pin these communications to the blockchain in batches.
These private data exchanges can also be coordinated with more sophisticated on-chain transactions, such as token transfers.
"},{"location":"reference/events/#event-types","title":"Event Types","text":"FireFly provides a number of different types of events to your application, designed to allow you to build your application state machine quickly and reliably.
All events in FireFly share a common base structure, regardless of their type. They are then linked (via a reference) to an object that contains detailed information.
The categories of event your application can receive are as follows:
See the Core Resources/Event page for a full list of event types, and more details on the data you can expect for each type.
"},{"location":"reference/events/#blockchain-events","title":"Blockchain events","text":"FireFly allows your application to subscribe to any event from a blockchain smart contract.
In order for applications connected to the FireFly API to receive blockchain events from a smart contract, a ContractListener first must be created to instruct FireFly to listen to those events from the blockchain (via the blockchain plugin).
Once you have configured the blockchain event listener, every event detected from the blockchain will result in a FireFly event delivered to your application of type blockchain_event_received.
As of 1.3.1 a group of event filters can be established under a single topic when supported by the connector, which has benefits for ordering. See Contract Listeners for more detail
Check out the Custom Contracts Tutorial for a walk-through of how to set up listeners for the events from your smart contracts.
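As a rough illustration, an application creates a listener by POSTing a JSON body to the contract listeners API. The payload below is a hedged sketch: the interface ID, address, and option values are placeholders, and the exact fields supported depend on your connector, so check the API reference before relying on this shape.

```python
import json

# Hypothetical payload for POST /api/v1/namespaces/{ns}/contracts/listeners.
# All values below are illustrative, not real identifiers.
listener = {
    "interface": {"id": "8bdd27a5-67c1-4960-8d1e-7aa31b9084d3"},  # FFI UUID
    "location": {"address": "0x0000000000000000000000000000000000000000"},
    "eventPath": "Changed",           # event name from the contract interface
    "topic": "simple-storage",        # groups listeners for ordered delivery
    "options": {"firstEvent": "newest"},
}
body = json.dumps(listener)
```

Once created, each matching blockchain event produces a `blockchain_event_received` event on the FireFly event bus, as described above.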
FireFly automatically establishes listeners for some blockchain events:
Events from the FireFly BatchPin contract that is used to pin identities, off-chain data broadcast and private messaging to the blockchain.
Events from Token contracts, for which a Token Pool has been configured. These events are detected indirectly via the token connector.
FireFly provides a Wallet API, that is pluggable to multiple token implementations without needing to change your app.
The pluggable API/Event interface allows all kinds of technical implementations of tokens to be fitted into a common framework.
The following wallet operations are supported. These are universal to all token implementations - NFTs and fungible tokens alike:
FireFly processes, indexes and stores the events associated with these actions, for any Token Pool that has been configured on the FireFly node.
See Token Transfer and Token Approval for more information on the individual operations.
The token connector is responsible for mapping from the raw Blockchain Events, to the FireFly model for tokens. Reference token connector implementations are provided for common interface standards implemented by tokens - like ERC-20, ERC-721 and ERC-1155.
A particular token contract might have many additional features that are unique to that contract, particularly around governance. For these you would use the Smart Contract features of FireFly to interact with the blockchain API and Events directly.
"},{"location":"reference/events/#message-events-on-chain-off-chain-coordinated","title":"Message events: on-chain / off-chain coordinated","text":"Event aggregation between data arriving off-chain, and the associated ordered proof/transaction events being confirmed on-chain, is a complex orchestration task.
The universal order and additional transaction logic on-chain must be the source of truth for when and how an event is processed.
However, that event cannot be processed until the off-chain private/broadcast data associated with that event is also available and verified against the on-chain hash of that additional data.
They might arrive in any order, and no further events can be processed on that business transaction until the data is available.
Multiple parties might be emitting events as part of the business transaction, and the outcome will only be assured to be the same by all parties if they process these events in the same order.
Hyperledger FireFly handles this for you. Events related to a message are not emitted until both the on-chain and off-chain parts (including large binary attachments) are available and verified in your local FireFly node, and all previous messages on the same topic have been processed successfully by your application.
Your application just needs to:
Choose a topic for your messages that determines the ordered stream they are part of, such as a business transaction identifier. See Message for more information
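A message payload with a topic might look like the sketch below. The field names follow the FireFly message model described above, but treat the exact API shape as an assumption and consult the Message reference; the topic value is a hypothetical business transaction ID.

```python
import json

# Illustrative body for sending a message on an ordered topic stream.
message = {
    "header": {
        "topics": ["order-12345"],  # hypothetical business transaction ID
        "tag": "order_update",      # optional app-defined classification
    },
    "data": [{"value": {"status": "shipped"}}],
}
body = json.dumps(message)
```

All messages sharing the topic `order-12345` would then be delivered to every party's application in the same order.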
"},{"location":"reference/events/#transaction-submission-events","title":"Transaction submission events","text":"These events are emitted each time a new transaction is initiated via the FireFly API.
These events are only emitted on the local FireFly node that initiates an activity.
For more information about FireFly Transactions, and how they relate to blockchain transactions, see Transaction.
"},{"location":"reference/firefly_interface_format/","title":"FireFly Interface Format","text":"FireFly defines a common, blockchain agnostic way to describe smart contracts. This is referred to as a Contract Interface, and it is written in the FireFly Interface (FFI) format. It is a simple JSON document that has a name, a namespace, a version, a list of methods, and a list of events.
"},{"location":"reference/firefly_interface_format/#overview","title":"Overview","text":"There are four required fields when broadcasting a contract interface in FireFly: a name, a version, a list of methods, and a list of events. A namespace field will also be filled in automatically based on the URL path parameter. Here is an example of the structure of the required fields:
{\n \"name\": \"example\",\n \"version\": \"v1.0.0\",\n \"methods\": [],\n \"events\": []\n}\n NOTE: Contract interfaces are scoped to a namespace. Within a namespace each contract interface must have a unique name and version combination. The same name and version combination can exist in different namespaces simultaneously.
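A small client-side check for the four required fields described above can catch mistakes before broadcasting. This helper is illustrative, not part of the FireFly API (the server performs its own validation).

```python
# Sketch: verify an FFI document has the required top-level fields
# (name, version, methods, events) before broadcasting it.
REQUIRED_FFI_FIELDS = ("name", "version", "methods", "events")

def missing_ffi_fields(ffi: dict) -> list:
    return [f for f in REQUIRED_FFI_FIELDS if f not in ffi]

ok = {"name": "example", "version": "v1.0.0", "methods": [], "events": []}
bad = {"name": "example", "version": "v1.0.0"}
ok_missing = missing_ffi_fields(ok)    # []
bad_missing = missing_ffi_fields(bad)  # ["methods", "events"]
```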
"},{"location":"reference/firefly_interface_format/#method","title":"Method","text":"Let's look at what goes inside the methods array now. It is also a JSON object that has a name, a list of params which are the arguments the function will take and a list of returns which are the return values of the function. It also has an optional description which can be helpful in OpenAPI Spec generation. Finally, it has an optional details object which wraps blockchain specific information about this method. This can be used by the blockchain plugin when invoking this function, and it is also used in documentation generation.
{\n \"name\": \"add\",\n \"description\": \"Add two numbers together\",\n \"params\": [],\n \"returns\": [],\n \"details\": {}\n}\n"},{"location":"reference/firefly_interface_format/#event","title":"Event","text":"What goes into the events array is very similar. It is also a JSON object that has a name and a list of params. The difference is that events don't have returns. Arguments that are passed to the event when it is emitted are in params. It also has an optional description which can be helpful in OpenAPI Spec generation. Finally, it has an optional details object which wraps blockchain specific information about this event. This can be used by the blockchain plugin when invoking this function, and it is also used in documentation generation.
{\n \"name\": \"added\",\n \"description\": \"An event that occurs when numbers have been added\",\n \"params\": [],\n \"details\": {}\n}\n"},{"location":"reference/firefly_interface_format/#param","title":"Param","text":"Both methods, and events have lists of params or returns, and the type of JSON object that goes in each of these arrays is the same. It is simply a JSON object with a name and a schema. There is also an optional details field that is passed to the blockchain plugin for blockchain specific requirements.
{\n \"name\": \"x\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {}\n }\n}\n"},{"location":"reference/firefly_interface_format/#schema","title":"Schema","text":"The param schema is an important field which tells FireFly the type information about this particular field. This is used in several different places, such as OpenAPI Spec generation, API request validation, and blockchain request preparation.
The schema field accepts JSON Schema (version 2020-12) with several additional requirements:
The type field is always mandatory. The supported values for type are: boolean, integer, string, object, and array. NOTE: Floats or decimals are not currently accepted because certain underlying blockchains (e.g. Ethereum) only allow integers
The type field here is the JSON input type when making a request to FireFly to invoke or query a smart contract. This type can be different from the actual blockchain type, usually specified in the details field, if there is a compatible type mapping between the two.
The details field is quite important in some cases. Because the details field is passed to the blockchain plugin, it is used to encapsulate blockchain specific type information about a particular field. Additionally, because each blockchain plugin can add rules to the list of schema requirements above, a blockchain plugin can enforce that certain fields are always present within the details field.
For example, the Ethereum plugin always needs to know what Solidity type the field is. It also defines several optional fields. A full Ethereum details field may look like:
{\n \"type\": \"uint256\",\n \"internalType\": \"uint256\",\n \"indexed\": false\n}\n"},{"location":"reference/firefly_interface_format/#automated-generation-of-firefly-interfaces","title":"Automated generation of FireFly Interfaces","text":"A convenience endpoint exists on the API to facilitate converting from native blockchain interface formats such as an Ethereum ABI to the FireFly Interface format. For details, please see the API documentation for the contract interface generation endpoint.
For an example of using this endpoint with a specific Ethereum contract, please see the Tutorial to Work with custom smart contracts.
"},{"location":"reference/firefly_interface_format/#full-example","title":"Full Example","text":"Putting it all together, here is a full example of the FireFly Interface format with all the fields filled in:
{\n \"namespace\": \"default\",\n \"name\": \"SimpleStorage\",\n \"description\": \"A simple smart contract that stores and retrieves an integer on-chain\",\n \"version\": \"v1.0.0\",\n \"methods\": [\n {\n \"name\": \"get\",\n \"description\": \"Retrieve the value of the stored integer\",\n \"params\": [],\n \"returns\": [\n {\n \"name\": \"output\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ],\n \"details\": {\n \"stateMutability\": \"viewable\"\n }\n },\n {\n \"name\": \"set\",\n \"description\": \"Set the stored value on-chain\",\n \"params\": [\n {\n \"name\": \"newValue\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ],\n \"returns\": [],\n \"details\": {\n \"stateMutability\": \"payable\"\n }\n }\n ],\n \"events\": [\n {\n \"name\": \"Changed\",\n \"description\": \"An event that is fired when the stored integer value changes\",\n \"params\": [\n {\n \"name\": \"from\",\n \"schema\": {\n \"type\": \"string\",\n \"details\": {\n \"type\": \"address\",\n \"internalType\": \"address\",\n \"indexed\": true\n }\n }\n },\n {\n \"name\": \"value\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ],\n \"details\": {}\n }\n ]\n}\n"},{"location":"reference/idempotency/","title":"Idempotency Keys","text":""},{"location":"reference/idempotency/#idempotency","title":"Idempotency","text":"The transaction submission REST APIs of Hyperledger FireFly are idempotent.
Idempotent APIs allow an application to safely submit a request multiple times, and for the transaction to only be accepted and executed once.
This is the well-accepted approach for REST APIs over HTTP/HTTPS to achieve resilience, as HTTP requests can fail in indeterminate ways. For example, in a request or gateway timeout situation, the requester is unable to know whether the request will or will not eventually be processed.
There are various types of FireFly transaction that can be submitted. These include direct submission of blockchain transactions to a smart contract, as well as more complex transactions including coordination of multiple operations across on-chain and off-chain connectors.
In order for Hyperledger FireFly to deduplicate transactions, and make them idempotent, the application must supply an idempotencyKey on each API request.
The caller of the API specifies its own unique identifier (an arbitrary string up to 256 characters) that uniquely identifies the request, in the idempotencyKey field of the API.
So if there is a network connectivity failure, or an abrupt termination of either runtime, the application can safely resubmit the REST API call. If the original request was already accepted, the resubmission will be rejected with a 409 Conflict HTTP code.
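This resubmission pattern can be sketched as a client-side loop. This is an illustrative sketch only, not FireFly client code: the `post` callable standing in for the HTTP request, and the way it reports status codes, are assumptions.

```python
import time

def submit_with_retry(post, payload, idempotency_key, attempts=3, delay=1.0):
    """Retry a transaction submission, treating 409 Conflict as proof
    that an earlier attempt was already accepted (hypothetical client)."""
    payload = dict(payload, idempotencyKey=idempotency_key)
    for attempt in range(attempts):
        try:
            status = post(payload)  # returns an HTTP status code
        except ConnectionError:
            time.sleep(delay * (2 ** attempt))  # back off, then resubmit
            continue
        if status in (200, 202):
            return "accepted"
        if status == 409:
            return "already-accepted"  # safe: the first submission won
        raise RuntimeError(f"unexpected status {status}")
    raise TimeoutError("gave up after retries")
```

Because the same `idempotencyKey` is sent on every attempt, the loop cannot cause the transaction to execute twice.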
Examples of how an app might construct such an idempotencyKey include:
Be careful of cases where the business data might not be unique - like a transfer of 10 coins from A to B.
Such a transfer could happen multiple times, and each would be a separate business transaction.
Whereas a transfer with invoice number abcd1234 of 10 coins from A to B is assured to be unique.
This moves the challenge up one layer into your application. How does that unique ID get generated? Is that itself idempotent?
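One way to answer that question is to derive the key deterministically from business data the application already guarantees to be unique, such as the invoice number above. The field names and key format below are illustrative assumptions, not a FireFly convention:

```python
import hashlib

def idempotency_key(invoice_number: str, sender: str, recipient: str, amount: str) -> str:
    """Derive a deterministic idempotency key from unique business data.
    Re-running this on the same inputs always yields the same key."""
    material = "|".join([invoice_number, sender, recipient, amount])
    digest = hashlib.sha256(material.encode()).hexdigest()
    # FireFly accepts an arbitrary string of up to 256 characters
    return f"transfer-{invoice_number}-{digest[:16]}"
```

Because the key is a pure function of the business transaction, a retried submission naturally reuses the same key, while a genuinely new transfer (new invoice) gets a new one.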
FireFly provides an idempotent interface downstream to connectors.
Each operation within a FireFly transaction receives a unique ID within the overall transaction that is used as an idempotency key when invoking that connector.
Well-formed connectors honor this idempotency key internally, ensuring that the end-to-end transaction submission is idempotent.
Key examples of such connectors are EVMConnect and others built on the Blockchain Connector Toolkit.
When an operation is retried automatically, the same idempotency key is re-used to avoid resubmission.
"},{"location":"reference/idempotency/#short-term-retry","title":"Short term retry","text":"The FireFly core uses standard HTTP request code to communicate with all connector APIs.
This code includes exponential backoff retry, which can be enabled with a simple boolean in the plugin configuration of FireFly core. The minimum retry delay, maximum retry delay, and backoff factor can also be tuned individually for each connector.
See Configuration Reference for more information.
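As a sketch of how those three tuning parameters interact, the wait before each retry grows exponentially from the initial delay and is capped at the maximum. The parameter names here mirror the concepts, not the exact configuration keys:

```python
def backoff_delays(initial: float, maximum: float, factor: float, retries: int):
    """Compute the wait (in seconds) before each retry attempt:
    exponential growth from `initial`, capped at `maximum`."""
    delay, out = initial, []
    for _ in range(retries):
        out.append(min(delay, maximum))
        delay *= factor
    return out
```

For example, with an initial delay of 250ms, a maximum of 2s, and a factor of 2, five retries would wait 0.25s, 0.5s, 1s, 2s, 2s.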
"},{"location":"reference/idempotency/#administrative-operation-retry","title":"Administrative operation retry","text":"The operations/{operationId}/retry API can be called administratively to resubmit a transaction that has reached Failed status, or otherwise been determined by an operator/monitor to be unrecoverable within the connector.
In this case, the previous operation is marked Retried, a new operation ID is allocated, and the operation is re-submitted to the connector with this new ID.
Identities are a critical part of using FireFly in a multi-party system. Every party that joins a multi-party system must begin by claiming an on- and off-chain identity, which is described with a unique DID. Each type of identity is also associated with an on- or off-chain verifier, which can be used in some way to check the authorship of a piece of data. Together, these concepts form the backbone of the trust model for exchanging multi-party data.
"},{"location":"reference/identities/#types-of-identities","title":"Types of Identities","text":"There are three types of identities:
"},{"location":"reference/identities/#org","title":"org","text":"Organizations are the primary identity type in FireFly. They represent a logical on-chain signing identity, and the attached verifier is therefore a blockchain key (with the exact format depending on the blockchain being used). Every party in a multi-party system must claim a root organization identity as the first step to joining the network.
The root organization name and key must be defined in the FireFly config (once for every multi-party system). It can be claimed with a POST to /network/organizations/self.
Organizations may have child identities of any type.
"},{"location":"reference/identities/#node","title":"node","text":"Nodes represent a logical off-chain identity - and specifically, they are tied to an instance of a data exchange connector. The format of the attached verifier depends on the data exchange plugin being used, but it will be mapped to some validation provided by that plugin (i.e. the name of an X.509 certificate or similar). Every party in a multi-party system must claim a node identity when joining the network, which must be a child of one of its organization identities (but it is possible for many nodes to share a parent organization).
The node name must be defined in the FireFly config (once for every multi-party system). It can be claimed with a POST to /network/nodes/self.
Nodes must be a child of an organization, and cannot have any child identities of their own.
Note that \"nodes\" as an identity concept are distinct from FireFly supernodes, from underlying blockchain nodes, and from anywhere else the term \"node\" happens to be used.
"},{"location":"reference/identities/#custom","title":"custom","text":"Custom identities are similar to organizations, but are provided for applications to define their own more granular notions of identity. They are associated with an on-chain verifier in the same way as organizations.
They can only have child identities which are also of type \"custom\".
"},{"location":"reference/identities/#identity-claims","title":"Identity Claims","text":"Before an identity can be used within a multi-party system, it must be claimed. The identity claim is a special type of broadcast message sent by FireFly to establish an identity uniquely among the parties in the multi-party system. As with other broadcasts, this entails an on-chain transaction which contains a public reference to an off-chain piece of data (such as an IPFS reference) describing the details of the identity claim.
The claim data consists of information on the identity being claimed - such as the type, the DID, and the parent (if applicable). The DID must be unique and unclaimed. The verifier will be inferred from the message - for on-chain identities (org and custom), it is the blockchain key that was used to sign the on-chain portion of the message, while for off-chain identities (nodes), it is an identifier queried from data exchange.
For on-chain identities with a parent, two messages are actually required - the claim message signed with the new identity's blockchain key, as well as a separate verification message signed with the parent identity's blockchain key. Both messages must be received before the identity is confirmed.
"},{"location":"reference/identities/#messaging","title":"Messaging","text":"In the context of a multi-party system, FireFly provides capabilities for sending off-chain messages that are pinned to an on-chain proof. The sender of every message must therefore have an on-chain and off-chain identity. For private messages, every recipient must also have an on-chain and off-chain identity.
"},{"location":"reference/identities/#sender","title":"Sender","text":"When sending a message, the on-chain identity of the sender is controlled by the author and key fields.
- If author alone is specified, it should be the DID of an org or custom identity. The associated verifier will be looked up to use as the key.
- If key alone is specified, it must match the registered blockchain verifier for an org or custom identity that was previously claimed. A reverse lookup will be used to populate the DID for the author.
- If author and key are both specified, they will be used as-is (this can be used to send private messages with an unregistered blockchain key).

The resolved key will be used to sign the blockchain transaction, which establishes the sender's on-chain identity.
The sender's off-chain identity is always controlled by the node.name from the config along with the data exchange plugin.
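The author/key resolution rules can be sketched in Python. The registry lookups here (`did_to_key`, `key_to_did`) are hypothetical stand-ins for FireFly's identity database, not real APIs:

```python
def resolve_sender(author=None, key=None, did_to_key=None, key_to_did=None):
    """Resolve the (author DID, signing key) pair for an outbound message.
    did_to_key / key_to_did are hypothetical registry lookup tables."""
    if author and key:
        return author, key                 # used as-is (key may be unregistered)
    if author:
        return author, did_to_key[author]  # look up the registered verifier
    if key:
        return key_to_did[key], key        # reverse lookup to populate the DID
    raise ValueError("author or key required")
```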
When specifying private message recipients, each one has an identity and a node.
- If identity alone is specified, it should be the DID of an org or custom identity. The first node owned by that identity or one of its ancestors will be automatically selected.
- If identity and node are both specified, they will be used as-is. The node should be a child of the given identity or one of its ancestors.

The node in this case will control how the off-chain portion of the message is routed via data exchange.
When a message is received, FireFly verifies the following:
- author and key are specified in the message. The author must be a known org or custom identity. The key must match the blockchain key that was used to sign the on-chain portion of the message. For broadcast messages, the key must match the registered verifier for the author.
- For private messages, the sending node (as reported by data exchange) must be a known node identity which is a child of the message's author identity or one of its ancestors. The combination of the author identity and the node must also be found in the message group.

In addition, the data exchange plugin is responsible for verifying the sending and receiving identities for the off-chain data (such as validating the relevant certificates).
"},{"location":"reference/namespaces/","title":"Namespaces","text":""},{"location":"reference/namespaces/#introduction-to-namespaces","title":"Introduction to Namespaces","text":"Namespaces are a construct for segregating data and operations within a FireFly supernode. Each namespace is an isolated environment within a FireFly runtime, that allows independent configuration of:
They can be thought of in two basic modes:
"},{"location":"reference/namespaces/#multi-party-namespaces","title":"Multi-party Namespaces","text":"This namespace is shared with one or more other FireFly nodes. It requires three types of communication plugins - blockchain, data exchange, and shared storage. Organization and node identities must be claimed with an identity broadcast when joining the namespace, which establishes credentials for blockchain and off-chain communication. Shared objects can be defined in the namespace (such as datatypes and token pools), and details of them will be implicitly broadcast to other members.
This type of namespace is used when multiple parties need to share on- and off-chain data and agree upon the ordering and authenticity of that data. For more information, see the multi-party system overview.
"},{"location":"reference/namespaces/#gateway-namespaces","title":"Gateway Namespaces","text":"Nothing in this namespace will be shared automatically, and no assumptions are made about whether other parties connected through this namespace are also using Hyperledger FireFly. Plugins for data exchange and shared storage are not supported. If any identities or definitions are created in this namespace, they will be stored in the local database, but will not be shared implicitly outside the node.
This type of namespace is mainly used when interacting directly with a blockchain, without assuming that the interaction needs to conform to FireFly's multi-party system model.
"},{"location":"reference/namespaces/#configuration","title":"Configuration","text":"FireFly nodes can be configured with one or many namespaces of different modes. This means that a single FireFly node can be used to interact with multiple distinct blockchains, multiple distinct token economies, and multiple business networks.
Below is an example plugin and namespace configuration containing both a multi-party and gateway namespace:
plugins:\n database:\n - name: database0\n type: sqlite3\n sqlite3:\n migrations:\n auto: true\n url: /etc/firefly/db?_busy_timeout=5000\n blockchain:\n - name: blockchain0\n type: ethereum\n ethereum:\n ethconnect:\n url: http://ethconnect_0:8080\n topic: \"0\"\n - name: blockchain1\n type: ethereum\n ethereum:\n ethconnect:\n url: http://ethconnect_01:8080\n topic: \"0\"\n dataexchange:\n - name: dataexchange0\n type: ffdx\n ffdx:\n url: http://dataexchange_0:3000\n sharedstorage:\n - name: sharedstorage0\n type: ipfs\n ipfs:\n api:\n url: http://ipfs_0:5001\n gateway:\n url: http://ipfs_0:8080\n tokens:\n - name: erc20_erc721\n broadcastName: erc20_erc721\n type: fftokens\n fftokens:\n url: http://tokens_0_0:3000\nnamespaces:\n default: alpha\n predefined:\n - name: alpha\n description: Default predefined namespace\n defaultKey: 0x123456\n plugins: [database0, blockchain0, dataexchange0, sharedstorage0, erc20_erc721]\n multiparty:\n networkNamespace: alpha\n enabled: true\n org:\n name: org0\n description: org0\n key: 0x123456\n node:\n name: node0\n description: node0\n contract:\n - location:\n address: 0x4ae50189462b0e5d52285f59929d037f790771a6\n firstEvent: 0\n - location:\n address: 0x3c1bef20a7858f5c2f78bda60796758d7cafff27\n firstEvent: 5000\n - name: omega\n defaultKey: 0x48a54f9964d7ceede2d6a8b451bf7ad300c7b09f\n description: Gateway namespace\n plugins: [database0, blockchain1, erc20_erc721]\n The namespaces.predefined object contains the following sub-keys:
- defaultKey is a blockchain key used to sign transactions when none is specified (in multi-party mode, defaults to the org key)
- plugins is an array of plugin names to be activated for this namespace (defaults to all available plugins if omitted)
- multiparty.networkNamespace is the namespace name to be sent in plugin calls, if it differs from the locally used name (useful for interacting with multiple shared namespaces of the same name - defaults to the value of name)
- multiparty.enabled controls if multi-party mode is enabled (defaults to true if an org key or org name is defined on this namespace or in the deprecated org section at the root)
- multiparty.org is the root org identity for this multi-party namespace (containing name, description, and key)
- multiparty.node is the local node identity for this multi-party namespace (containing name and description)
- multiparty.contract is an array of objects describing the location(s) of a FireFly multi-party smart contract. Its children are blockchain-agnostic location and firstEvent fields, with formats identical to the same fields on custom contract interfaces and contract listeners. The blockchain plugin will interact with the first contract in the list until instructions are received to terminate it and migrate to the next.

The following restrictions apply:
- name must be unique on this node
- For historical reasons, "ff_system" is a reserved string and cannot be used as a name or multiparty.networkNamespace
- A database plugin is required for every namespace
- If multiparty.enabled is true, plugins must include one each of blockchain, dataexchange, and sharedstorage
- If multiparty.enabled is false, plugins must not include dataexchange or sharedstorage

All namespaces must be called out in the FireFly config file in order to be valid. Namespaces found in the database but not represented in the config file will be ignored.
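The per-mode plugin rules lend themselves to a small validation sketch. This mimics the documented restrictions, not FireFly's actual validation code, and the function name is an assumption:

```python
def validate_namespace(plugin_types, multiparty_enabled):
    """Check the set of activated plugin types for a namespace against the
    documented restrictions; returns a list of error strings (empty if valid)."""
    plugin_types = set(plugin_types)
    errors = []
    if "database" not in plugin_types:
        errors.append("a database plugin is required for every namespace")
    required = {"blockchain", "dataexchange", "sharedstorage"}
    if multiparty_enabled and not required <= plugin_types:
        errors.append("multi-party namespaces require blockchain, dataexchange, and sharedstorage")
    if not multiparty_enabled and plugin_types & {"dataexchange", "sharedstorage"}:
        errors.append("gateway namespaces must not include dataexchange or sharedstorage")
    return errors
```

Applied to the example config above: the alpha namespace activates all five plugin types in multi-party mode, while omega (a gateway namespace) activates only database, blockchain, and tokens, so both pass.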
"},{"location":"reference/namespaces/#definitions","title":"Definitions","text":"In FireFly, definitions are immutable payloads that are used to define identities, datatypes, smart contract interfaces, token pools, and other constructs. Each type of definition in FireFly has a schema that it must adhere to. Some definitions also have a name and a version which must be unique within a namespace. In a multi-party namespace, definitions are broadcast to other organizations.
"},{"location":"reference/namespaces/#local-definitions","title":"Local Definitions","text":"The following are all \"definition\" types in FireFly:
For gateway namespaces, the APIs which create these definitions will become an immediate local database insert, instead of performing a broadcast. Additional caveats:
To enable TLS in FireFly, there is a configuration available to provide certificates and keys.
The common configuration is as such:
tls:\n enabled: true/false # Toggle on or off TLS\n caFile: <path to the CA file you want the client or server to trust>\n certFile: <path to the cert file you want the client or server to use when performing authentication in mTLS>\n keyFile: <path to the private key file you want the client or server to use when performing authentication in mTLS>\n clientAuth: true/false # Only applicable to the server side, to toggle on or off client authentication\n requiredDNAttributes: A set of required subject DN attributes. Each entry is a regular expression, and the subject certificate must have a matching attribute of the specified type (CN, C, O, OU, ST, L, STREET, POSTALCODE, SERIALNUMBER are valid attributes)\n NOTE The CAs, certificates and keys have to be in PEM format.
"},{"location":"reference/tls/#configuring-tls-for-the-api-server","title":"Configuring TLS for the API server","text":"Using the above configuration, we can place it under the http config and enable TLS or mTLS for any API call.
See this config section for details
"},{"location":"reference/tls/#configuring-tls-for-the-webhooks","title":"Configuring TLS for the webhooks","text":"Using the above configuration, we can place it under the events.webhooks config and enable TLS or mTLS for any webhook call.
See this config section for details
"},{"location":"reference/tls/#configuring-clients-and-websockets","title":"Configuring clients and websockets","text":"FireFly has a set of HTTP clients and websockets that communicate with external endpoints and services that may be secured using TLS. To configure these clients, use the same configuration as above in the respective places in the config which relate to those clients.
For example, if you wish to configure the ethereum blockchain connector with TLS you would look at this config section
For more clients, search in the configuration reference for a TLS section.
"},{"location":"reference/tls/#enhancing-validation-of-certificates","title":"Enhancing validation of certificates","text":"In the case where we want to verify that a specific client certificate has certain attributes, we can use the requiredDNAttributes configuration as described above. This allows matching, by means of regular expressions, against well-known distinguished name (DN) attributes. To learn more about DNs, look at this document
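A sketch of how such required-DN-attribute matching might behave, based on the description above (this mimics the documented semantics, not FireFly's actual implementation):

```python
import re

# The attribute types the documentation lists as valid
VALID_ATTRS = {"CN", "C", "O", "OU", "ST", "L", "STREET", "POSTALCODE", "SERIALNUMBER"}

def dn_matches(subject: dict, required: dict) -> bool:
    """subject: DN attributes from the peer certificate, e.g. {"CN": "node1.example.com"}.
    required: attribute type -> regular expression the value must fully match."""
    for attr, pattern in required.items():
        if attr not in VALID_ATTRS:
            raise ValueError(f"unsupported DN attribute {attr}")
        value = subject.get(attr)
        if value is None or not re.fullmatch(pattern, value):
            return False  # attribute missing or not matching its regex
    return True
```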
fftokens is a protocol that can be implemented by token connector runtimes in order to be usable by the fftokens plugin in FireFly.
The connector runtime must expose an HTTP and websocket server, along with a minimum set of HTTP APIs and websocket events. Each connector will be strongly coupled to a specific ledger technology and token standard(s), but no assumptions are made in the fftokens spec about what these technologies must be, as long as they can satisfy the basic requirements laid out here.
Note that this is an internal protocol in the FireFly ecosystem - application developers working against FireFly should never need to care about or directly interact with a token connector runtime. The audience for this document is only developers interested in creating new token connectors (or editing/forking existing ones).
Two implementations of this specification have been created to date (both based on common Ethereum token standards) - firefly-tokens-erc1155 and firefly-tokens-erc20-erc721.
"},{"location":"reference/microservices/fftokens/#http-apis","title":"HTTP APIs","text":"This is the minimum set of APIs that must be implemented by a conforming token connector. A connector may choose to expose other APIs for its own purposes. All requests and responses to the APIs below are encoded as JSON. The APIs are currently understood to live under a /api/v1 prefix.
POST /createpool","text":"Create a new token pool. The exact meaning of this is flexible - it may mean invoking a contract or contract factory to actually define a new set of tokens via a blockchain transaction, or it may mean indexing a set of tokens that already exists (depending on the options a connector accepts in config).
In a multiparty network, this operation will only be performed by one of the parties, and FireFly will broadcast the result to the others.
FireFly will store a \"pending\" token pool after a successful creation, but will replace it with a \"confirmed\" token pool after a successful activation (see below).
Request
{\n \"type\": \"fungible\",\n \"signer\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A\",\n \"namespace\": \"default\",\n \"name\": \"FFCoin\",\n \"symbol\": \"FFC\",\n \"data\": \"pool-metadata\",\n \"requestId\": \"1\",\n \"config\": {}\n}\n Parameter Type Description type string enum The type of pool to create. Currently supported types are \"fungible\" and \"nonfungible\". It is recommended (but not required) that token connectors support both. Unrecognized/unsupported types should be rejected with HTTP 400. signer string The signing identity to be used for the blockchain transaction, in a format understood by this connector. namespace string The namespace of the token pool name string (OPTIONAL) If supported by this token contract, this is a requested name for the token pool. May be ignored at the connector's discretion. symbol string (OPTIONAL) If supported by this token contract, this is a requested symbol for the token pool. May be ignored at the connector's discretion. requestId string (OPTIONAL) A unique identifier for this request. Will be included in the \"receipt\" websocket event to match receipts to requests. data string (OPTIONAL) A data string that should be returned in the connector's response to this creation request. config object (OPTIONAL) An arbitrary JSON object where the connector may accept additional parameters if desired. Each connector may define its own valid options to influence how the token pool is created. Response
HTTP 200: pool creation was successful, and the pool details are returned in the response.
See Response Types: Token Pool
HTTP 202: request was accepted, but pool will be created asynchronously, with \"receipt\" and \"token-pool\" events sent later on the websocket.
See Response Types: Async Request
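A connector-side sketch of handling a /createpool body per the parameter table above; the function name and response shapes are illustrative assumptions, but the HTTP 400 rejection of unsupported pool types and the HTTP 202 asynchronous acceptance follow the spec:

```python
def validate_createpool(body: dict):
    """Return (status, response) for a /createpool request body."""
    for field in ("type", "signer", "namespace"):
        if field not in body:
            return 400, {"error": f"missing required field {field}"}
    # Unrecognized/unsupported types must be rejected with HTTP 400
    if body["type"] not in ("fungible", "nonfungible"):
        return 400, {"error": f"unsupported pool type {body['type']}"}
    # Accept and create asynchronously; the later "receipt" websocket
    # event is matched back to this request via requestId
    return 202, {"id": body.get("requestId")}
```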
"},{"location":"reference/microservices/fftokens/#post-activatepool","title":"POST /activatepool","text":"Activate a token pool to begin receiving events. Generally this means the connector will create blockchain event listeners for transfer and approval events related to the set of tokens encompassed by this token pool.
In a multiparty network, this step will be performed by every member after a successful token pool broadcast. It therefore also serves the purpose of validating the broadcast info - if the connector does not find a valid pool given the poolLocator and config information passed in to this call, the pool should not get confirmed.
Request
{\n \"namespace\": \"default\",\n \"poolLocator\": \"id=F1\",\n \"poolData\": \"extra-pool-info\",\n \"config\": {}\n}\n Parameter Type Description namespace string The namespace of the token pool poolLocator string The locator of the pool, as supplied by the output of the pool creation. poolData string (OPTIONAL) A data string that should be permanently attached to this pool and returned in all events. config object (OPTIONAL) An arbitrary JSON object where the connector may accept additional parameters if desired. This should be the same config object that was passed when the pool was created. Response
HTTP 200: pool activation was successful, and the pool details are returned in the response.
See Response Types: Token Pool
HTTP 202: request was accepted, but pool will be activated asynchronously, with \"receipt\" and \"token-pool\" events sent later on the websocket.
See Response Types: Async Request
HTTP 204: activation was successful - no separate receipt will be delivered, but \"token-pool\" event will be sent later on the websocket.
No body
"},{"location":"reference/microservices/fftokens/#post-deactivatepool","title":"POST /deactivatepool","text":"Deactivate a token pool to stop receiving events and delete all blockchain listeners related to that pool.
Request
{\n \"namespace\": \"default\",\n \"poolLocator\": \"id=F1\",\n \"poolData\": \"extra-pool-info\",\n \"config\": {}\n}\n Parameter Type Description namespace string The namespace of the token pool poolLocator string The locator of the pool, as supplied by the output of the pool creation. poolData string (OPTIONAL) The data string that was attached to this pool at activation. config object (OPTIONAL) An arbitrary JSON object where the connector may accept additional parameters if desired. Response
HTTP 204: deactivation was successful, and one or more listeners were deleted.
No body
HTTP 404: no blockchain listeners were found for the given pool information.
No body
"},{"location":"reference/microservices/fftokens/#post-checkinterface","title":"POST /checkinterface","text":"This is an optional (but recommended) API for token connectors. If implemented, support will be indicated by the presence of the interfaceFormat field in all Token Pool responses.
In the case that a connector supports multiple variants of a given token standard (such as many different ways to structure \"mint\" or \"burn\" calls on an underlying smart contract), this API allows the connector to be provided with a full description of the interface methods in use for a given token pool, so the connector can determine which methods it knows how to invoke.
Request
{\n \"poolLocator\": \"id=F1\",\n \"format\": \"abi\",\n \"methods\": [\n {\n \"name\": \"burn\",\n \"type\": \"function\",\n \"inputs\": [\n {\n \"internalType\": \"uint256\",\n \"name\": \"tokenId\",\n \"type\": \"uint256\"\n }\n ],\n \"outputs\": [],\n \"stateMutability\": \"nonpayable\"\n },\n ...\n ]\n}\n Parameter Type Description poolLocator string The locator of the pool, as supplied by the output of the pool creation. format string enum The format of the data in this payload. Should match the interfaceFormat as supplied by the output of the pool creation. methods object array A list of all the methods available on the interface underpinning this token pool, encoded in the format specified by format. Response
HTTP 200: interface was successfully parsed, and methods of interest are returned in the body.
The response body includes a section for each type of token operation (burn/mint/transfer/approval), which specifies a subset of the input body useful to that operation. The caller (FireFly) can then store and provide the proper subset of the interface for every future token operation (via the interface parameter).
{\n \"burn\": {\n \"format\": \"abi\",\n \"methods\": [\n {\n \"name\": \"burn\",\n \"type\": \"function\",\n \"inputs\": [\n {\n \"internalType\": \"uint256\",\n \"name\": \"tokenId\",\n \"type\": \"uint256\"\n }\n ],\n \"outputs\": [],\n \"stateMutability\": \"nonpayable\"\n }\n ]\n },\n \"mint\": { ... },\n \"transfer\": { ... },\n \"approval\": { ... }\n}\n"},{"location":"reference/microservices/fftokens/#post-mint","title":"POST /mint","text":"Mint new tokens.
Request
{\n \"namespace\": \"default\",\n \"poolLocator\": \"id=F1\",\n \"signer\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A\",\n \"to\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A\",\n \"amount\": \"10\",\n \"tokenIndex\": \"1\",\n \"uri\": \"ipfs://000000\",\n \"requestId\": \"1\",\n \"data\": \"transfer-metadata\",\n \"config\": {},\n \"interface\": {}\n}\n Parameter Type Description namespace string The namespace of the token pool poolLocator string The locator of the pool, as supplied by the output of the pool creation. signer string The signing identity to be used for the blockchain transaction, in a format understood by this connector. to string The identity to receive the minted tokens, in a format understood by this connector. amount number string The amount of tokens to mint. tokenIndex string (OPTIONAL) For non-fungible tokens that require choosing an index at mint time, the index of the specific token to mint. uri string (OPTIONAL) For non-fungible tokens that support choosing a URI at mint time, the URI to be attached to the token. requestId string (OPTIONAL) A unique identifier for this request. Will be included in the \"receipt\" websocket event to match receipts to requests. data string (OPTIONAL) A data string that should be returned in the connector's response to this mint request. config object (OPTIONAL) An arbitrary JSON object where the connector may accept additional parameters if desired. Each connector may define its own valid options to influence how the mint is carried out. interface object (OPTIONAL) Details on interface methods that are useful to this operation, as negotiated previously by a /checkinterface call. Response
HTTP 202: request was accepted, but mint will occur asynchronously, with \"receipt\" and \"token-mint\" events sent later on the websocket.
See Response Types: Async Request
"},{"location":"reference/microservices/fftokens/#post-burn","title":"POST /burn","text":"Burn tokens.
Request
{\n \"namespace\": \"default\",\n \"poolLocator\": \"id=F1\",\n \"signer\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A\",\n \"from\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A\",\n \"amount\": \"10\",\n \"tokenIndex\": \"1\",\n \"requestId\": \"1\",\n \"data\": \"transfer-metadata\",\n \"config\": {},\n \"interface\": {}\n}\n Parameter Type Description namespace string The namespace of the token pool poolLocator string The locator of the pool, as supplied by the output of the pool creation. signer string The signing identity to be used for the blockchain transaction, in a format understood by this connector. from string The identity that currently owns the tokens to be burned, in a format understood by this connector. amount number string The amount of tokens to burn. tokenIndex string (OPTIONAL) For non-fungible tokens, the index of the specific token to burn. requestId string (OPTIONAL) A unique identifier for this request. Will be included in the \"receipt\" websocket event to match receipts to requests. data string (OPTIONAL) A data string that should be returned in the connector's response to this burn request. config object (OPTIONAL) An arbitrary JSON object where the connector may accept additional parameters if desired. Each connector may define its own valid options to influence how the burn is carried out. interface object (OPTIONAL) Details on interface methods that are useful to this operation, as negotiated previously by a /checkinterface call. Response
HTTP 202: request was accepted, but burn will occur asynchronously, with \"receipt\" and \"token-burn\" events sent later on the websocket.
See Response Types: Async Request
"},{"location":"reference/microservices/fftokens/#post-transfer","title":"POST /transfer","text":"Transfer tokens from one address to another.
Request
{\n \"namespace\": \"default\",\n \"poolLocator\": \"id=F1\",\n \"signer\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A\",\n \"from\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A\",\n \"to\": \"0xb107ed9caa1323b7bc36e81995a4658ec2251951\",\n \"amount\": \"1\",\n \"tokenIndex\": \"1\",\n \"requestId\": \"1\",\n \"data\": \"transfer-metadata\",\n \"config\": {},\n \"interface\": {}\n}\n Parameter Type Description namespace string The namespace of the token pool poolLocator string The locator of the pool, as supplied by the output of the pool creation. signer string The signing identity to be used for the blockchain transaction, in a format understood by this connector. from string The identity to be used for the source of the transfer, in a format understood by this connector. to string The identity to be used for the destination of the transfer, in a format understood by this connector. amount number string The amount of tokens to transfer. tokenIndex string (OPTIONAL) For non-fungible tokens, the index of the specific token to transfer. requestId string (OPTIONAL) A unique identifier for this request. Will be included in the \"receipt\" websocket event to match receipts to requests. data string (OPTIONAL) A data string that should be returned in the connector's response to this transfer request. config object (OPTIONAL) An arbitrary JSON object where the connector may accept additional parameters if desired. Each connector may define its own valid options to influence how the transfer is carried out. interface object (OPTIONAL) Details on interface methods that are useful to this operation, as negotiated previously by a /checkinterface call. Response
HTTP 202: request was accepted, but transfer will occur asynchronously, with \"receipt\" and \"token-transfer\" events sent later on the websocket.
See Response Types: Async Request
"},{"location":"reference/microservices/fftokens/#post-approval","title":"POST /approval","text":"Approve another identity to manage tokens.
Request
{\n \"namespace\": \"default\",\n \"poolLocator\": \"id=F1\",\n \"signer\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A\",\n \"operator\": \"0xb107ed9caa1323b7bc36e81995a4658ec2251951\",\n \"approved\": true,\n \"requestId\": \"1\",\n \"data\": \"approval-metadata\",\n \"config\": {},\n \"interface\": {}\n}\n Parameter Type Description namespace string The namespace of the token pool poolLocator string The locator of the pool, as supplied by the output of the pool creation. signer string The signing identity to be used for the blockchain transaction, in a format understood by this connector. operator string The identity to be approved (or unapproved) for managing the signer's tokens. approved boolean Whether to approve (the default) or unapprove. requestId string (OPTIONAL) A unique identifier for this request. Will be included in the \"receipt\" websocket event to match receipts to requests. data string (OPTIONAL) A data string that should be returned in the connector's response to this approval request. config object (OPTIONAL) An arbitrary JSON object where the connector may accept additional parameters if desired. Each connector may define its own valid options to influence how the approval is carried out. interface object (OPTIONAL) Details on interface methods that are useful to this operation, as negotiated previously by a /checkinterface call. Response
HTTP 202: request was accepted, but approval will occur asynchronously, with \"receipt\" and \"token-approval\" events sent later on the websocket.
See Response Types: Async Request
"},{"location":"reference/microservices/fftokens/#websocket-commands","title":"Websocket Commands","text":"In order to start listening for events on a certain namespace, the client needs to send the start command. Clients should send this command every time they connect, or after an automatic reconnect.
{\n \"type\": \"start\",\n \"namespace\": \"default\"\n}\n"},{"location":"reference/microservices/fftokens/#websocket-events","title":"Websocket Events","text":"A connector should expose a websocket at /api/ws. All emitted websocket events are a JSON string of the form:
{\n \"id\": \"event-id\",\n \"event\": \"event-name\",\n \"data\": {}\n}\n The event name will match one of the names listed below, and the data payload will correspond to the linked response object.
All events except the receipt event must be acknowledged by sending an ack of the form:
{\n \"event\": \"ack\",\n \"data\": {\n \"id\": \"event-id\"\n }\n}\n Many messages may also be batched into a single websocket event of the form:
{\n \"id\": \"event-id\",\n \"event\": \"batch\",\n \"data\": {\n \"events\": [\n {\n \"event\": \"event-name\",\n \"data\": {}\n },\n ...\n ]\n }\n}\n Batched messages must be acked all at once using the ID of the batch.
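The acknowledgement rules above can be sketched as a small helper. This is an illustrative sketch of a client-side convention, not the official client: "receipt" events are not acked, a "batch" event is acked once using the ID of the whole batch, and every other event is acked individually by its ID.

```python
import json

# Given a raw websocket event from the connector, build the ack payload
# to send back, or None if no ack is required.
def build_ack(raw_event: str):
    event = json.loads(raw_event)
    if event.get("event") == "receipt":
        return None  # receipts do not require an ack
    # Single events and batches alike are acked by their top-level event ID
    return json.dumps({"event": "ack", "data": {"id": event["id"]}})
```

Note that for a batch, the IDs of the inner events are never acked individually; only the batch's own ID is sent back.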
"},{"location":"reference/microservices/fftokens/#receipt","title":"receipt","text":"An asynchronous operation has completed.
See Response Types: Receipt
"},{"location":"reference/microservices/fftokens/#token-pool","title":"token-pool","text":"A new token pool has been created or activated.
See Response Types: Token Pool
"},{"location":"reference/microservices/fftokens/#token-mint","title":"token-mint","text":"Tokens have been minted.
See Response Types: Token Transfer
"},{"location":"reference/microservices/fftokens/#token-burn","title":"token-burn","text":"Tokens have been burned.
See Response Types: Token Transfer
"},{"location":"reference/microservices/fftokens/#token-transfer","title":"token-transfer","text":"Tokens have been transferred.
See Response Types: Token Transfer
"},{"location":"reference/microservices/fftokens/#token-approval","title":"token-approval","text":"Token approvals have changed.
See Response Types: Token Approval
"},{"location":"reference/microservices/fftokens/#response-types","title":"Response Types","text":""},{"location":"reference/microservices/fftokens/#async-request","title":"Async Request","text":"Many operations may happen asynchronously in the background, and will return only a request ID. This may be a request ID that was passed in, or if none was passed, will be randomly assigned. This ID can be used to correlate with a receipt event later received on the websocket.
{\n \"id\": \"b84ab27d-0d50-42a6-9c26-2fda5eb901ba\"\n}\n"},{"location":"reference/microservices/fftokens/#receipt_1","title":"Receipt","text":"{\n \"headers\": {\n \"type\": \"\",\n \"requestId\": \"\"\n },\n \"transactionHash\": \"\",\n \"errorMessage\": \"\"\n}\n Parameter Type Description headers.type string enum The type of this response. Should be \"TransactionSuccess\", \"TransactionUpdate\", or \"TransactionFailed\". headers.requestId string The ID of the request to which this receipt should correlate. transactionHash string The unique identifier for the blockchain transaction which generated this receipt. errorMessage string (OPTIONAL) If this is a failure, contains details on the reason for the failure."},{"location":"reference/microservices/fftokens/#token-pool_1","title":"Token Pool","text":"{\n \"namespace\": \"default\",\n \"type\": \"fungible\",\n \"data\": \"pool-metadata\",\n \"poolLocator\": \"id=F1\",\n \"standard\": \"ERC20\",\n \"interfaceFormat\": \"abi\",\n \"symbol\": \"FFC\",\n \"decimals\": 18,\n \"info\": {},\n \"signer\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A\",\n \"blockchain\": {}\n}\n Parameter Type Description namespace string The namespace of the token pool type string enum The type of pool that was created. data string A copy of the data that was passed in on the creation request. poolLocator string A string to identify this pool, generated by the connector. Must be unique for each pool created by this connector. Will be passed back on all operations within this pool, and may be packed with relevant data about the pool for later usage (such as the address and type of the pool). standard string (OPTIONAL) The name of a well-defined token standard to which this pool conforms. interfaceFormat string enum (OPTIONAL) If this connector supports the /checkinterface API, this is the interface format to be used for describing the interface underpinning this pool. Must be \"abi\" or \"ffi\". 
symbol string (OPTIONAL) The symbol for this token pool, if applicable. decimals number (OPTIONAL) The number of decimals used for balances in this token pool, if applicable. info object (OPTIONAL) Additional information about the pool. Each connector may define the format for this object. signer string (OPTIONAL) If this operation triggered a blockchain transaction, the signing identity used for the transaction. blockchain object (OPTIONAL) If this operation triggered a blockchain transaction, contains details on the blockchain event in FireFly's standard blockchain event format."},{"location":"reference/microservices/fftokens/#token-transfer_1","title":"Token Transfer","text":"Note that mint and burn operations are just specialized versions of transfer. A mint will omit the \"from\" field, while a burn will omit the \"to\" field.
{\n \"namespace\": \"default\",\n \"id\": \"1\",\n \"data\": \"transfer-metadata\",\n \"poolLocator\": \"id=F1\",\n \"poolData\": \"extra-pool-info\",\n \"from\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A\",\n \"to\": \"0xb107ed9caa1323b7bc36e81995a4658ec2251951\",\n \"amount\": \"1\",\n \"tokenIndex\": \"1\",\n \"uri\": \"ipfs://000000\",\n \"signer\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A\",\n \"blockchain\": {}\n}\n Parameter Type Description namespace string The namespace of the token pool id string An identifier for this transfer. Must be unique for every transfer within this pool. data string A copy of the data that was passed in on the mint/burn/transfer request. May be omitted if the token contract does not support a method of attaching extra data (will result in reduced ability for FireFly to correlate the inputs and outputs of the transaction). poolLocator string The locator of the pool, as supplied by the output of the pool creation. poolData string The extra data associated with the pool at pool activation. from string The identity used for the source of the transfer. to string The identity used for the destination of the transfer. amount number string The amount of tokens transferred. tokenIndex string (OPTIONAL) For non-fungible tokens, the index of the specific token transferred. uri string (OPTIONAL) For non-fungible tokens, the URI attached to the token. signer string (OPTIONAL) If this operation triggered a blockchain transaction, the signing identity used for the transaction. 
blockchain object (OPTIONAL) If this operation triggered a blockchain transaction, contains details on the blockchain event in FireFly's standard blockchain event format."},{"location":"reference/microservices/fftokens/#token-approval_1","title":"Token Approval","text":"{\n \"namespace\": \"default\",\n \"id\": \"1\",\n \"data\": \"transfer-metadata\",\n \"poolLocator\": \"id=F1\",\n \"poolData\": \"extra-pool-info\",\n \"operator\": \"0xb107ed9caa1323b7bc36e81995a4658ec2251951\",\n \"approved\": true,\n \"subject\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A:0xb107ed9caa1323b7bc36e81995a4658ec2251951\",\n \"info\": {},\n \"signer\": \"0x0Ef1D0Dd56a8FB1226C0EaC374000B81D6c8304A\",\n \"blockchain\": {}\n}\n Parameter Type Description namespace string The namespace of the token pool id string An identifier for this approval. Must be unique for every approval within this pool. data string A copy of the data that was passed in on the approval request. May be omitted if the token contract does not support a method of attaching extra data (will result in reduced ability for FireFly to correlate the inputs and outputs of the transaction). poolLocator string The locator of the pool, as supplied by the output of the pool creation. poolData string The extra data associated with the pool at pool activation. operator string The identity that was approved (or unapproved) for managing tokens. approved boolean Whether this was an approval or unapproval. subject string A string identifying the scope of the approval, generated by the connector. Approvals with the same subject are understood to replace one another, so that a previously-recorded approval becomes inactive. This string may be a combination of the identities involved, the token index, etc. info object (OPTIONAL) Additional information about the approval. Each connector may define the format for this object. 
signer string (OPTIONAL) If this operation triggered a blockchain transaction, the signing identity used for the transaction. blockchain object (OPTIONAL) If this operation triggered a blockchain transaction, contains details on the blockchain event in FireFly's standard blockchain event format."},{"location":"reference/types/batch/","title":"Batch","text":"A batch bundles a number of off-chain messages, with associated data, into a single payload for broadcast or private transfer.
This allows the transfer of many messages (hundreds) to be backed by a single blockchain transaction, making very efficient use of the blockchain.
The same benefit also applies to the off-chain transport mechanism.
Shared storage operations benefit from the same optimization. In IPFS, for example, chunks are 256KB in size, so there is a significant throughput benefit in packaging many small messages into a single large payload.
For a data exchange transport, there is often cryptography and transport overhead for each individual transport level send between participants. This is particularly true if using a data exchange transport with end-to-end payload encryption, using public/private key cryptography for the envelope.
"},{"location":"reference/types/batch/#example","title":"Example","text":"{\n \"id\": \"894bc0ea-0c2e-4ca4-bbca-b4c39a816bbb\",\n \"type\": \"private\",\n \"namespace\": \"ns1\",\n \"node\": \"5802ab80-fa71-4f52-9189-fb534de93756\",\n \"group\": \"cd1fedb69fb83ad5c0c62f2f5d0b04c59d2e41740916e6815a8e063b337bd32e\",\n \"created\": \"2022-05-16T01:23:16Z\",\n \"author\": \"did:firefly:org/example\",\n \"key\": \"0x0a989907dcd17272257f3ebcf72f4351df65a846\",\n \"hash\": \"78d6861f860c8724468c9254b99dc09e7d9fd2d43f26f7bd40ecc9ee47be384d\",\n \"payload\": {\n \"tx\": {\n \"type\": \"private\",\n \"id\": \"04930d84-0227-4044-9d6d-82c2952a0108\"\n },\n \"messages\": [],\n \"data\": []\n }\n}\n"},{"location":"reference/types/batch/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID of the batch UUID type The type of the batch FFEnum:\"broadcast\"\"private\" namespace The namespace of the batch string node The UUID of the node that generated the batch UUID group The privacy group the batch is sent to, for private batches Bytes32 created The time the batch was sealed FFTime author The DID of the identity of the submitter string key The on-chain signing key used to sign the transaction string hash The hash of the manifest of the batch Bytes32 payload Batch.payload BatchPayload"},{"location":"reference/types/batch/#batchpayload","title":"BatchPayload","text":"Field Name Description Type tx BatchPayload.tx TransactionRef messages BatchPayload.messages Message[] data BatchPayload.data Data[]"},{"location":"reference/types/batch/#transactionref","title":"TransactionRef","text":"Field Name Description Type type The type of the FireFly transaction FFEnum: id The UUID of the FireFly transaction UUID"},{"location":"reference/types/blockchainevent/","title":"BlockchainEvent","text":"Blockchain Events are detected by the blockchain plugin:
Each Blockchain Event (once final) exists in an absolute location somewhere in the transaction history of the blockchain: a particular slot, in a particular block.
How that position is described involves blockchain specifics, depending on how a particular blockchain represents transactions, blocks and events (or \"logs\").
So FireFly uses a flexible string protocolId in the core object to represent this location, together with a convention adopted by the blockchain plugins to create some consistency.
An example protocolId string is: 000000000041/000020/000003
000000000041 - this is the block number; 000020 - this is the transaction index within that block; 000003 - this is the event (/log) index within that block. The string is alphanumerically sortable as a plain string.
Sufficient zero padding is included at each layer to support future expansion without creating a string that would no longer sort correctly.
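The zero-padded protocolId convention above can be sketched as a pair of helper functions. The field widths (12 digits for the block number, 6 each for the transaction and event indices) are taken from the documented example; this is an illustration, not the connector's actual implementation.

```python
# Sketch of the protocolId convention: BLOCKNUMBER/TXN_INDEX/EVENT_INDEX,
# zero-padded so plain string sorting matches chronological order.
def format_protocol_id(block: int, tx_index: int, event_index: int) -> str:
    return f"{block:012d}/{tx_index:06d}/{event_index:06d}"

def parse_protocol_id(protocol_id: str) -> tuple:
    # Split the three zero-padded segments back into integers
    block, tx_index, event_index = protocol_id.split("/")
    return int(block), int(tx_index), int(event_index)
```

For example, block 41, transaction index 20, event index 3 yields the documented string 000000000041/000020/000003, and a list of such strings sorts correctly without any numeric parsing.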
"},{"location":"reference/types/blockchainevent/#example","title":"Example","text":"{\n \"id\": \"e9bc4735-a332-4071-9975-b1066e51ab8b\",\n \"source\": \"ethereum\",\n \"namespace\": \"ns1\",\n \"name\": \"MyEvent\",\n \"listener\": \"c29b4595-03c2-411a-89e3-8b7f27ef17bb\",\n \"protocolId\": \"000000000048/000000/000000\",\n \"output\": {\n \"addr1\": \"0x55860105d6a675dbe6e4d83f67b834377ba677ad\",\n \"value2\": \"42\"\n },\n \"info\": {\n \"address\": \"0x57A9bE18CCB50D06B7567012AaF6031D669BBcAA\",\n \"blockHash\": \"0xae7382ef2573553f517913b927d8b9691ada8d617266b8b16f74bb37aa78cae8\",\n \"blockNumber\": \"48\",\n \"logIndex\": \"0\",\n \"signature\": \"Changed(address,uint256)\",\n \"subId\": \"sb-e4d5efcd-2eba-4ed1-43e8-24831353fffc\",\n \"timestamp\": \"1653048837\",\n \"transactionHash\": \"0x34b0327567fefed09ac7b4429549bc609302b08a9cbd8f019a078ec44447593d\",\n \"transactionIndex\": \"0x0\"\n },\n \"timestamp\": \"2022-05-16T01:23:15Z\",\n \"tx\": {\n \"blockchainId\": \"0x34b0327567fefed09ac7b4429549bc609302b08a9cbd8f019a078ec44447593d\"\n }\n}\n"},{"location":"reference/types/blockchainevent/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID assigned to the event by FireFly UUID source The blockchain plugin or token service that detected the event string namespace The namespace of the listener that detected this blockchain event string name The name of the event in the blockchain smart contract string listener The UUID of the listener that detected this event, or nil for built-in events in the system namespace UUID protocolId An alphanumerically sortable string that represents this event uniquely on the blockchain (convention for plugins is zero-padded values BLOCKNUMBER/TXN_INDEX/EVENT_INDEX) string output The data output by the event, parsed to JSON according to the interface of the smart contract JSONObject info Detailed blockchain specific information about the event, as generated by the blockchain connector 
JSONObject timestamp The time allocated to this event by the blockchain. This is the block timestamp for most blockchain connectors FFTime tx If this blockchain event is correlated to a FireFly transaction, such as a FireFly-submitted token transfer, this field is set to the UUID of the FireFly transaction BlockchainTransactionRef"},{"location":"reference/types/blockchainevent/#blockchaintransactionref","title":"BlockchainTransactionRef","text":"Field Name Description Type type The type of the FireFly transaction FFEnum: id The UUID of the FireFly transaction UUID blockchainId The blockchain transaction ID, in the format specific to the blockchain involved in the transaction. Not all FireFly transactions include a blockchain string"},{"location":"reference/types/contractapi/","title":"ContractAPI","text":"Contract APIs provide generated REST APIs for on-chain smart contracts.
API endpoints are generated to invoke or perform query operations against each of the functions/methods implemented by the smart contract.
API endpoints are also provided to add listeners to the events of that smart contract.
Note that once you have established listeners for your blockchain events into FireFly, you need to also subscribe in your application to receive the FireFly events (of type blockchain_event_received) that are emitted for each detected blockchain event.
For more information see the Events reference section.
"},{"location":"reference/types/contractapi/#url","title":"URL","text":"The base path for your Contract API is:
/api/v1/namespaces/{ns}/apis/{apiName} For the default namespace, this can be shortened to:
/api/v1/apis/{apiName} Contract APIs are registered against:
A FireFly Interface (FFI) definition, which defines in a blockchain agnostic format the list of functions/events supported by the smart contract. Also detailed type information about the inputs/outputs to those functions/events.
An optional location configured on the Contract API, describing where the instance of the smart contract that the API should interact with exists in the blockchain layer. For example, the address of the smart contract for an Ethereum-based blockchain, or the name and channel for a Hyperledger Fabric-based blockchain.
If the location is not specified on creation of the Contract API, then it must be specified on each API call made to the Contract API endpoints.
Each Contract API comes with an OpenAPI V3 / Swagger generated definition, which can be downloaded from:
/api/v1/namespaces/{namespaces}/apis/{apiName}/api/swagger.json A browser / exerciser UI for your API is also available on:
/api/v1/namespaces/{namespaces}/apis/{apiName}/api{\n \"id\": \"0f12317b-85a0-4a77-a722-857ea2b0a5fa\",\n \"namespace\": \"ns1\",\n \"interface\": {\n \"id\": \"c35d3449-4f24-4676-8e64-91c9e46f06c4\"\n },\n \"location\": {\n \"address\": \"0x95a6c4895c7806499ba35f75069198f45e88fc69\"\n },\n \"name\": \"my_contract_api\",\n \"message\": \"b09d9f77-7b16-4760-a8d7-0e3c319b2a16\",\n \"urls\": {\n \"api\": \"http://127.0.0.1:5000/api/v1/namespaces/default/apis/my_contract_api\",\n \"openapi\": \"http://127.0.0.1:5000/api/v1/namespaces/default/apis/my_contract_api/api/swagger.json\",\n \"ui\": \"http://127.0.0.1:5000/api/v1/namespaces/default/apis/my_contract_api/api\"\n },\n \"published\": false\n}\n"},{"location":"reference/types/contractapi/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID of the contract API UUID namespace The namespace of the contract API string interface Reference to the FireFly Interface definition associated with the contract API FFIReference location If this API is tied to an individual instance of a smart contract, this field can include a blockchain specific contract identifier. 
For example an Ethereum contract address, or a Fabric chaincode name and channel JSONAny name The name that is used in the URL to access the API string networkName The published name of the API within the multiparty network string message The UUID of the broadcast message that was used to publish this API to the network UUID urls The URLs to use to access the API ContractURLs published Indicates if the API is published to other members of the multiparty network bool"},{"location":"reference/types/contractapi/#ffireference","title":"FFIReference","text":"Field Name Description Type id The UUID of the FireFly interface UUID name The name of the FireFly interface string version The version of the FireFly interface string"},{"location":"reference/types/contractapi/#contracturls","title":"ContractURLs","text":"Field Name Description Type api The URL to use to invoke the API string openapi The URL to download the OpenAPI v3 (Swagger) description for the API generated in JSON or YAML format string ui The URL to use in a web browser to access the SwaggerUI explorer/exerciser for the API string"},{"location":"reference/types/contractlistener/","title":"ContractListener","text":"A contract listener configures FireFly to stream events from the blockchain, from a specific location on the blockchain, according to a given definition of the interface for that event.
Check out the Custom Contracts Tutorial for a walk-through of how to set up listeners for the events from your smart contracts.
See below for a deep dive into the format of contract listeners and important concepts to understand when managing them.
"},{"location":"reference/types/contractlistener/#event-filters","title":"Event filters","text":""},{"location":"reference/types/contractlistener/#multiple-filters","title":"Multiple filters","text":"From v1.3.1 onwards, a contract listener can be created with multiple filters under a single topic, when supported by the connector. Each filter contains:
In addition to this list of multiple filters, the listener specifies a single topic to identify the stream of events.
Creating a single listener that listens for multiple events will allow for the easiest management of listeners, and for strong ordering of the events that they process.
"},{"location":"reference/types/contractlistener/#single-filter","title":"Single filter","text":"Before v1.3.1, each contract listener would only support listening to one specific event from a contract interface. Each listener would consist of:
topic which determines the ordered stream that these events are part of. For backwards compatibility, this format is still supported by the API.
"},{"location":"reference/types/contractlistener/#signature-strings","title":"Signature strings","text":""},{"location":"reference/types/contractlistener/#string-format","title":"String format","text":"Each filter is identified by a generated signature that matches a single event, and each contract listener is identified by a signature computed from its filters.
Ethereum provides a string standard for event signatures, of the form EventName(uint256,bytes). Prior to v1.3.1, the signature of each Ethereum contract listener would exactly follow this Ethereum format.
As of v1.3.1, Ethereum format signature strings have been changed in FireFly, because this format does not fully describe the event - particularly because each top-level parameter in the ABI definition can be marked as indexed. For example, while the following two Solidity events have the same signature, they are serialized differently due to the different placement of indexed parameters, and thus a listener must define both individually to be able to process them:
Transfer: event Transfer(address indexed _from, address indexed _to, uint256 _value)\n Transfer: event Transfer(address indexed _from, address indexed _to, uint256 indexed _tokenId);\n The two above are now expressed in the following manner by the FireFly Ethereum blockchain connector:
Transfer(address,address,uint256) [i=0,1]\nTransfer(address,address,uint256) [i=0,1,2]\n The [i=] listing at the end of the signature indicates the position of all parameters that are marked as indexed.
Building on the blockchain-specific signature format for each event, FireFly will then compute the final signature for each filter and each contract listener as follows:
0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1:Transfer(address,address,uint256) [i=0,1]; FireFly restricts the creation of a contract listener containing duplicate filters.
This includes the special case where one filter is a superset of another filter, due to a wildcard location.
For example, if two filters are listening to the same event, but one has specified a location and the other hasn't, then the latter will be a superset, and already be listening to all the events matching the first filter. Creation of duplicate or superset filters within a single listener will be blocked.
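The signature construction described above can be sketched as follows. This is an illustration only: the real strings are computed by the blockchain connector, and the (solidity_type, indexed) pair representation of parameters is an assumption made for this example.

```python
# Sketch of the FireFly Ethereum signature string format:
# EventName(type,...) followed by [i=...] listing indexed positions,
# prefixed with the contract address location for the final filter signature.
def event_signature(name: str, params) -> str:
    # params: list of (solidity_type, indexed) tuples
    types = ",".join(t for t, _ in params)
    indexed = ",".join(str(i) for i, (_, is_indexed) in enumerate(params) if is_indexed)
    sig = f"{name}({types})"
    return f"{sig} [i={indexed}]" if indexed else sig

def filter_signature(location: str, name: str, params) -> str:
    # The final filter signature prefixes the event signature with the location
    return f"{location}:{event_signature(name, params)}"
```

With this sketch, the two Transfer events above produce distinct strings (differing only in the [i=...] suffix), which is exactly why the indexed positions are included.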
"},{"location":"reference/types/contractlistener/#duplicate-listeners","title":"Duplicate listeners","text":"As noted above, each listener has a generated signature. This signature - containing all the locations and event signatures combined with the listener topic - will guarantee uniqueness of the contract listener. If you tried to create the same listener again, you would receive HTTP 409. This combination can allow a developer to assert that their listener exists, without the risk of creating duplicates.
Note: Prior to v1.3.1, FireFly would detect duplicates simply by requiring a unique combination of signature + topic + location for each listener. The updated behavior for the listener signature is intended to preserve similar functionality, even when dealing with listeners that contain many event filters.
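The "assert that the listener exists" pattern described above can be sketched as a small helper. This is a hypothetical client-side convention, not a FireFly API: it simply treats HTTP 409 on creation as confirmation that an identical listener is already in place.

```python
# Hypothetical helper: after attempting to create a contract listener,
# decide whether a listener with this signature now exists.
def listener_exists_after_create(status_code: int) -> bool:
    if status_code in (200, 201):
        return True   # newly created
    if status_code == 409:
        return True   # duplicate: an identical listener already exists
    return False      # some other failure - the listener may not exist
```

This makes listener creation idempotent from the application's point of view, which is useful on application startup.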
"},{"location":"reference/types/contractlistener/#backwards-compatibility","title":"Backwards compatibility","text":"As noted throughout this document, the behavior of listeners has changed in v1.3.1. However, the following behaviors are retained for backwards-compatibility, to ensure that code written prior to v1.3.1 should continue to function.
Listeners will continue to populate the top-level event and location fields; the first entry in the filters array is duplicated to these fields. When creating a listener, the event and location fields are still supported, and are translated into a filters array with a single entry. The signature field is preserved at the listener level. The two input formats supported when creating a contract listener are shown below.
"},{"location":"reference/types/contractlistener/#with-event-definition","title":"With event definition","text":"In these examples, the event schema in the FireFly Interface format is provided describing the event and its parameters. See FireFly Interface Format
Multiple Filters
{\n \"filters\": [\n {\n \"event\": {\n \"name\": \"Changed\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"x\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n },\n \"location\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n }\n },\n {\n \"event\": {\n \"name\": \"AnotherEvent\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"my-field\",\n \"schema\": {\n \"type\": \"string\",\n \"details\": {\n \"type\": \"address\",\n \"internalType\": \"address\",\n \"indexed\": true\n }\n }\n }\n ]\n },\n \"location\": {\n \"address\": \"0xa4ea5d0b6b2eaf194716f0cc73981939dca27da1\"\n }\n }\n ],\n \"options\": {\n \"firstEvent\": \"newest\"\n },\n \"topic\": \"simple-storage\"\n}\n One filter (old format)
{\n \"event\": {\n \"name\": \"Changed\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"x\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n },\n \"location\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n },\n \"options\": {\n \"firstEvent\": \"newest\"\n },\n \"topic\": \"simple-storage\"\n}\n"},{"location":"reference/types/contractlistener/#with-interface-reference","title":"With interface reference","text":"These examples use an interface reference when creating the filters; the eventPath field is used to reference an event defined within the interface provided. In this case, we do not need to provide the event schema as the section above shows. See an example of creating a FireFly Interface for an EVM smart contract.
Multiple Filters
{\n \"filters\": [\n {\n \"interface\": {\n \"id\": \"8bdd27a5-67c1-4960-8d1e-7aa31b9084d3\"\n },\n \"location\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n },\n \"eventPath\": \"Changed\"\n },\n {\n \"interface\": {\n \"id\": \"8bdd27a5-67c1-4960-8d1e-7aa31b9084d3\"\n },\n \"location\": {\n \"address\": \"0xa4ea5d0b6b2eaf194716f0cc73981939dca27da1\"\n },\n \"eventPath\": \"AnotherEvent\"\n }\n ],\n \"options\": {\n \"firstEvent\": \"newest\"\n },\n \"topic\": \"simple-storage\"\n}\n One filter (old format)
{\n \"interface\": {\n \"id\": \"8bdd27a5-67c1-4960-8d1e-7aa31b9084d3\"\n },\n \"location\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n },\n \"eventPath\": \"Changed\",\n \"options\": {\n \"firstEvent\": \"newest\"\n },\n \"topic\": \"simple-storage\"\n}\n"},{"location":"reference/types/contractlistener/#example","title":"Example","text":"{\n \"id\": \"d61980a9-748c-4c72-baf5-8b485b514d59\",\n \"interface\": {\n \"id\": \"ff1da3c1-f9e7-40c2-8d93-abb8855e8a1d\"\n },\n \"namespace\": \"ns1\",\n \"name\": \"contract1_events\",\n \"backendId\": \"sb-dd8795fc-a004-4554-669d-c0cf1ee2c279\",\n \"location\": {\n \"address\": \"0x596003a91a97757ef1916c8d6c0d42592630d2cf\"\n },\n \"created\": \"2022-05-16T01:23:15Z\",\n \"event\": {\n \"name\": \"Changed\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"x\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n },\n \"signature\": \"0x596003a91a97757ef1916c8d6c0d42592630d2cf:Changed(uint256)\",\n \"topic\": \"app1_topic\",\n \"options\": {\n \"firstEvent\": \"newest\"\n },\n \"filters\": [\n {\n \"event\": {\n \"name\": \"Changed\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"x\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n },\n \"location\": {\n \"address\": \"0x596003a91a97757ef1916c8d6c0d42592630d2cf\"\n },\n \"signature\": \"0x596003a91a97757ef1916c8d6c0d42592630d2cf:Changed(uint256)\"\n }\n ]\n}\n"},{"location":"reference/types/contractlistener/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID of the smart contract listener UUID interface Deprecated: Please use 'interface' in the array of 'filters' instead FFIReference namespace The namespace of the listener, which defines the namespace of all blockchain events detected by this listener string name A 
descriptive name for the listener string backendId An ID assigned by the blockchain connector to this listener string location Deprecated: Please use 'location' in the array of 'filters' instead JSONAny created The creation time of the listener FFTime event Deprecated: Please use 'event' in the array of 'filters' instead FFISerializedEvent signature A concatenation of all the stringified signature of the event and location, as computed by the blockchain plugin string topic A topic to set on the FireFly event that is emitted each time a blockchain event is detected from the blockchain. Setting this topic on a number of listeners allows applications to easily subscribe to all events they need string options Options that control how the listener subscribes to events from the underlying blockchain ContractListenerOptions filters A list of filters for the contract listener. Each filter is made up of an Event and an optional Location. Events matching these filters will always be emitted in the order determined by the blockchain. ListenerFilter[]"},{"location":"reference/types/contractlistener/#ffireference","title":"FFIReference","text":"Field Name Description Type id The UUID of the FireFly interface UUID name The name of the FireFly interface string version The version of the FireFly interface string"},{"location":"reference/types/contractlistener/#ffiserializedevent","title":"FFISerializedEvent","text":"Field Name Description Type name The name of the event string description A description of the smart contract event string params An array of event parameter/argument definitions FFIParam[] details Additional blockchain specific fields about this event from the original smart contract. Used by the blockchain plugin and for documentation generation. JSONObject"},{"location":"reference/types/contractlistener/#ffiparam","title":"FFIParam","text":"Field Name Description Type name The name of the parameter. 
Note that parameters must be ordered correctly on the FFI, according to the order in the blockchain smart contract string schema FireFly uses an extended subset of JSON Schema to describe parameters, similar to OpenAPI/Swagger. Converters are available for native blockchain interface definitions / type systems - such as an Ethereum ABI. See the documentation for more detail JSONAny"},{"location":"reference/types/contractlistener/#contractlisteneroptions","title":"ContractListenerOptions","text":"Field Name Description Type firstEvent A blockchain specific string, such as a block number, to start listening from. The special strings 'oldest' and 'newest' are supported by all blockchain connectors. Default is 'newest' string"},{"location":"reference/types/contractlistener/#listenerfilter","title":"ListenerFilter","text":"Field Name Description Type event The definition of the event, either provided in-line when creating the listener, or extracted from the referenced FFI when supplied FFISerializedEvent location A blockchain specific contract identifier. For example an Ethereum contract address, or a Fabric chaincode name and channel JSONAny interface A reference to an existing FFI, containing pre-registered type information for the event, used in combination with eventPath FFIReference signature The stringified signature of the event and location, as computed by the blockchain plugin string"},{"location":"reference/types/data/","title":"Data","text":"Data is a uniquely identified piece of data available for retrieval or transfer.
Multiple data items can be attached to a message when sending data off-chain to another party in a multi-party system. Note that if you pass data in-line when sending a message, those data elements will be stored separately to the message and available to retrieve separately later.
A UUID is allocated to each data resource.
A hash is also calculated as follows:
- If the data has only a value, the hash is of that value serialized as JSON with no additional whitespace (order of the keys is retained from the original upload order).
- If the data has only a blob attachment, the hash is of the blob data.
- If the data has both a blob and a value, then the hash is a hash of the concatenation of a hash of the value and a hash of the blob.

Each data resource can contain a value, which is any JSON type: string, number, boolean, array or object. This value is stored directly in the FireFly database.
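The three hashing rules above can be sketched as follows. This is a minimal illustration, assuming SHA-256 throughout (consistent with the 32-byte Bytes32 hashes in the examples); FireFly's actual Go implementation is the normative one:

```python
import hashlib
import json

def data_hash(value=None, blob_hash: bytes = None) -> str:
    """Illustrative only: apply the three hashing rules for a data resource."""
    if value is not None:
        # Serialize the value as JSON with no additional whitespace,
        # retaining key order from the original upload
        value_bytes = json.dumps(value, separators=(",", ":")).encode()
        if blob_hash is None:
            # Value only: hash of the serialized value
            return hashlib.sha256(value_bytes).hexdigest()
        # Value + blob: hash of the concatenation of the two hashes
        value_digest = hashlib.sha256(value_bytes).digest()
        return hashlib.sha256(value_digest + blob_hash).hexdigest()
    # Blob only: the hash is the hash of the blob data itself
    return blob_hash.hex()
```

Note that `json.dumps` preserves key insertion order here, mirroring the rule that the original upload order of keys is retained.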
If the value you are storing is not JSON data, but is small enough you want it to be stored in the core database, then use a JSON string to store an encoded form of your data (such as XML, CSV etc.).
"},{"location":"reference/types/data/#datatype-validation-of-agreed-data-types","title":"Datatype - validation of agreed data types","text":"A datatype can be associated with your data, causing FireFly to verify the value against a schema before accepting it (on upload, or receipt from another party in the network).
These datatypes are pre-established via broadcast messages, and support versioning. Use this system to enforce a set of common data types for exchange of data across your business network, and reduce the overhead of data verification required in the application/integration tier.
More information can be found in the Datatype section
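To make the idea concrete, here is a toy check in the spirit of that validation. It only handles the type, properties and additionalProperties keywords of JSON Schema; FireFly itself applies full JSON Schema validation:

```python
# Toy subset of JSON Schema checking - illustrative only.
PYTHON_TYPES = {
    "object": dict, "array": list, "string": str,
    "integer": int, "number": (int, float), "boolean": bool,
}

def conforms(value, schema) -> bool:
    # Check the declared type of this level of the value
    if not isinstance(value, PYTHON_TYPES[schema.get("type", "object")]):
        return False
    if isinstance(value, dict):
        props = schema.get("properties", {})
        for key, child in value.items():
            if key in props:
                # Recurse into declared properties
                if not conforms(child, props[key]):
                    return False
            elif schema.get("additionalProperties") is False:
                # Undeclared property rejected, as in the widget example
                return False
    return True
```

Run against the widget schema shown in the Datatype example later in this document, a conforming object passes and a mistyped or extra property fails.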
"},{"location":"reference/types/data/#blob-binary-data-stored-via-the-data-exchange","title":"Blob - binary data stored via the Data Exchange","text":"Data resources can also contain a blob attachment, which is stored via the Data Exchange plugin outside of the FireFly core database. This is intended for large data payloads, which might be structured or unstructured: PDF documents, multi-MB XML payloads, CSV data exports, JPEG images, video files, etc.
A Data resource can contain both a value JSON payload, and a blob attachment, meaning that you bind a set of metadata to a binary payload. For example a set of extracted metadata from OCR processing of a PDF document.
One special case is a filename for a document. This pattern is so common for file/document management scenarios, that special handling is provided for it. If a JSON object is stored in value, and it has a property called name, then this value forms part of the data hash (as does every field in the value) and is stored in a separately indexed blob.name field.
The upload REST API provides an autometa form field, which can be set to ask FireFly core to automatically set the value to contain the filename, size, and MIME type from the file upload.
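As a purely illustrative sketch, an autometa upload might populate the value with a shape along these lines; the exact property names are chosen by FireFly core, so check the API reference rather than relying on this example:

```json
{
  "filename": "filename.pdf",
  "size": 12345,
  "type": "application/pdf"
}
```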
{\n \"id\": \"4f11e022-01f4-4c3f-909f-5226947d9ef0\",\n \"validator\": \"json\",\n \"namespace\": \"ns1\",\n \"hash\": \"5e2758423c99b799f53d3f04f587f5716c1ff19f1d1a050f40e02ea66860b491\",\n \"created\": \"2022-05-16T01:23:15Z\",\n \"datatype\": {\n \"name\": \"widget\",\n \"version\": \"v1.2.3\"\n },\n \"value\": {\n \"name\": \"filename.pdf\",\n \"a\": \"example\",\n \"b\": {\n \"c\": 12345\n }\n },\n \"blob\": {\n \"hash\": \"cef238f7b02803a799f040cdabe285ad5cd6db4a15cb9e2a1000f2860884c7ad\",\n \"size\": 12345,\n \"name\": \"filename.pdf\"\n }\n}\n"},{"location":"reference/types/data/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID of the data resource UUID validator The data validator type FFEnum: namespace The namespace of the data resource string hash The hash of the data resource. Derived from the value and the hash of any binary blob attachment Bytes32 created The creation time of the data resource FFTime datatype The optional datatype to use of validation of this data DatatypeRef value The value for the data, stored in the FireFly core database. Can be any JSON type - object, array, string, number or boolean. Can be combined with a binary blob attachment JSONAny public If the JSON value has been published to shared storage, this field is the id of the data in the shared storage plugin (IPFS hash etc.) string blob An optional hash reference to a binary blob attachment BlobRef"},{"location":"reference/types/data/#datatyperef","title":"DatatypeRef","text":"Field Name Description Type name The name of the datatype string version The version of the datatype. 
Semantic versioning is encouraged, such as v1.0.1 string"},{"location":"reference/types/data/#blobref","title":"BlobRef","text":"Field Name Description Type hash The hash of the binary blob data Bytes32 size The size of the binary data int64 name The name field from the metadata attached to the blob, commonly used as a path/filename, and indexed for search string path If a name is specified, this field stores the '/' prefixed and separated path extracted from the full name string public If the blob data has been published to shared storage, this field is the id of the data in the shared storage plugin (IPFS hash etc.) string"},{"location":"reference/types/datatype/","title":"Datatype","text":"A datatype defines the format of some data that can be shared between parties, in a way that FireFly can enforce consistency of that data against the schema.
Data that does not match the schema associated with it will not be accepted on upload to FireFly, and if this were bypassed by a participant in some way it would be rejected by all parties and result in a message_rejected event (rather than a message_confirmed event).
Currently JSON Schema validation of data is supported.
The system for defining datatypes is pluggable, to support other schemes in the future, such as XML Schema, CSV, or EDI.
"},{"location":"reference/types/datatype/#example","title":"Example","text":"{\n \"id\": \"3a479f7e-ddda-4bda-aa24-56d06c0bf08e\",\n \"message\": \"bfcf904c-bdf7-40aa-bbd7-567f625c26c0\",\n \"validator\": \"json\",\n \"namespace\": \"ns1\",\n \"name\": \"widget\",\n \"version\": \"1.0.0\",\n \"hash\": \"639cd98c893fa45a9df6fd87bd0393a9b39e31e26fbb1eeefe90cb40c3fa02d2\",\n \"created\": \"2022-05-16T01:23:16Z\",\n \"value\": {\n \"$id\": \"https://example.com/widget.schema.json\",\n \"$schema\": \"https://json-schema.org/draft/2020-12/schema\",\n \"title\": \"Widget\",\n \"type\": \"object\",\n \"properties\": {\n \"id\": {\n \"type\": \"string\",\n \"description\": \"The unique identifier for the widget.\"\n },\n \"name\": {\n \"type\": \"string\",\n \"description\": \"The person's last name.\"\n }\n },\n \"additionalProperties\": false\n }\n}\n"},{"location":"reference/types/datatype/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID of the datatype UUID message The UUID of the broadcast message that was used to publish this datatype to the network UUID validator The validator that should be used to verify this datatype FFEnum:\"json\"\"none\"\"definition\" namespace The namespace of the datatype. Data resources can only be created referencing datatypes in the same namespace string name The name of the datatype string version The version of the datatype. Multiple versions can exist with the same name. Use of semantic versioning is encourages, such as v1.0.1 string hash The hash of the value, such as the JSON schema. Allows all parties to be confident they have the exact same rules for verifying data created against a datatype Bytes32 created The time the datatype was created FFTime value The definition of the datatype, in the syntax supported by the validator (such as a JSON Schema definition) JSONAny"},{"location":"reference/types/event/","title":"Event","text":"Every Event emitted by FireFly shares a common structure.
See Events for a reference for how the overall event bus in Hyperledger FireFly operates, and descriptions of all the sub-categories of events.
"},{"location":"reference/types/event/#sequence","title":"Sequence","text":"A local sequence number is assigned to each event, and you can use an API to query events using this sequence number in exactly the same order that they are delivered to your application.
Events have a reference to the UUID of an object that is the subject of the event, such as a detailed Blockchain Event, or an off-chain Message.
When events are delivered to your application, the reference field is automatically retrieved and included in the JSON payload that is delivered to your application.
You can use the ?fetchreferences query parameter on API calls to request the same in-line JSON payload be included in query results.
The type of the reference also determines what subscription filters apply when performing server-side filtering.
Here is the mapping between event types, and the object that you find in the reference field.
For some event types, there is a secondary reference to an object that is associated with the event. This is set in a correlator field on the Event, but is not automatically fetched. This field is primarily used for the confirm option on API calls to allow FireFly to determine when a request has succeeded/failed.
Events have a topic, and how that topic is determined is specific to the type of event. This is intended to be a property you would use to filter events to your application, or query all historical events associated with a given business data stream.
For example when you send a Message, you set the topics you want that message to apply to, and FireFly ensures a consistent global order between all parties that receive that message.
When actions are submitted by a FireFly node, they are performed within a FireFly Transaction. The events that occur as a direct result of that transaction are tagged with the transaction ID so that they can be grouped together.
This is a higher-level construct than a blockchain transaction: it groups together a number of operations/events that might be on-chain or off-chain. In some cases, such as unpinned off-chain data transfer, a FireFly transaction can exist when there is no blockchain transaction at all. Wherever possible you will find that FireFly tags the FireFly transaction with any associated blockchain transaction(s).
Note that some events cannot be tagged with a Transaction ID:
data payload in the event they emitted)

| Event type(s) | Reference | Topic | Correlator |
| --- | --- | --- | --- |
| transaction_submitted | Transaction | transaction.type | |
| message_confirmed, message_rejected | Message | message.header.topics[i]* | message.header.cid |
| token_pool_confirmed | TokenPool | tokenPool.id | |
| token_pool_op_failed | Operation | tokenPool.id | tokenPool.id |
| token_transfer_confirmed | TokenTransfer | tokenPool.id | |
| token_transfer_op_failed | Operation | tokenPool.id | tokenTransfer.localId |
| token_approval_confirmed | TokenApproval | tokenPool.id | |
| token_approval_op_failed | Operation | tokenPool.id | tokenApproval.localId |
| namespace_confirmed | Namespace | "ff_definition" | |
| datatype_confirmed | Datatype | "ff_definition" | |
| identity_confirmed, identity_updated | Identity | "ff_definition" | |
| contract_interface_confirmed | FFI | "ff_definition" | |
| contract_api_confirmed | ContractAPI | "ff_definition" | |
| blockchain_event_received | BlockchainEvent | From listener ** | |
| blockchain_invoke_op_succeeded | Operation | | |
| blockchain_invoke_op_failed | Operation | | |
| blockchain_contract_deploy_op_succeeded | Operation | | |
| blockchain_contract_deploy_op_failed | Operation | | |

** The topic for a blockchain event is inherited from the blockchain listener, allowing you to create multiple blockchain listeners that all deliver messages to your application on a single FireFly topic.
"},{"location":"reference/types/event/#example","title":"Example","text":"{\n \"id\": \"5f875824-b36b-4559-9791-a57a2e2b30dd\",\n \"sequence\": 168,\n \"type\": \"transaction_submitted\",\n \"namespace\": \"ns1\",\n \"reference\": \"0d12aa75-5ed8-48a7-8b54-45274c6edcb1\",\n \"tx\": \"0d12aa75-5ed8-48a7-8b54-45274c6edcb1\",\n \"topic\": \"batch_pin\",\n \"created\": \"2022-05-16T01:23:15Z\"\n}\n"},{"location":"reference/types/event/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID assigned to this event by your local FireFly node UUID sequence A sequence indicating the order in which events are delivered to your application. Assure to be unique per event in your local FireFly database (unlike the created timestamp) int64 type All interesting activity in FireFly is emitted as a FireFly event, of a given type. The 'type' combined with the 'reference' can be used to determine how to process the event within your application FFEnum:\"transaction_submitted\"\"message_confirmed\"\"message_rejected\"\"datatype_confirmed\"\"identity_confirmed\"\"identity_updated\"\"token_pool_confirmed\"\"token_pool_op_failed\"\"token_transfer_confirmed\"\"token_transfer_op_failed\"\"token_approval_confirmed\"\"token_approval_op_failed\"\"contract_interface_confirmed\"\"contract_api_confirmed\"\"blockchain_event_received\"\"blockchain_invoke_op_succeeded\"\"blockchain_invoke_op_failed\"\"blockchain_contract_deploy_op_succeeded\"\"blockchain_contract_deploy_op_failed\" namespace The namespace of the event. Your application must subscribe to events within a namespace string reference The UUID of an resource that is the subject of this event. The event type determines what type of resource is referenced, and whether this field might be unset UUID correlator For message events, this is the 'header.cid' field from the referenced message. 
For certain other event types, a secondary object is referenced such as a token pool UUID tx The UUID of a transaction that is event is part of. Not all events are part of a transaction UUID topic A stream of information this event relates to. For message confirmation events, a separate event is emitted for each topic in the message. For blockchain events, the listener specifies the topic. Rules exist for how the topic is set for other event types string created The time the event was emitted. Not guaranteed to be unique, or to increase between events in the same order as the final sequence events are delivered to your application. As such, the 'sequence' field should be used instead of the 'created' field for querying events in the exact order they are delivered to applications FFTime"},{"location":"reference/types/ffi/","title":"FFI","text":"See FireFly Interface Format
"},{"location":"reference/types/ffi/#example","title":"Example","text":"{\n \"id\": \"c35d3449-4f24-4676-8e64-91c9e46f06c4\",\n \"message\": \"e4ad2077-5714-416e-81f9-7964a6223b6f\",\n \"namespace\": \"ns1\",\n \"name\": \"SimpleStorage\",\n \"description\": \"A simple example contract in Solidity\",\n \"version\": \"v0.0.1\",\n \"methods\": [\n {\n \"id\": \"8f3289dd-3a19-4a9f-aab3-cb05289b013c\",\n \"interface\": \"c35d3449-4f24-4676-8e64-91c9e46f06c4\",\n \"name\": \"get\",\n \"namespace\": \"ns1\",\n \"pathname\": \"get\",\n \"description\": \"Get the current value\",\n \"params\": [],\n \"returns\": [\n {\n \"name\": \"output\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\"\n }\n }\n }\n ],\n \"details\": {\n \"stateMutability\": \"viewable\"\n }\n },\n {\n \"id\": \"fc6f54ee-2e3c-4e56-b17c-4a1a0ae7394b\",\n \"interface\": \"c35d3449-4f24-4676-8e64-91c9e46f06c4\",\n \"name\": \"set\",\n \"namespace\": \"ns1\",\n \"pathname\": \"set\",\n \"description\": \"Set the value\",\n \"params\": [\n {\n \"name\": \"newValue\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\"\n }\n }\n }\n ],\n \"returns\": [],\n \"details\": {\n \"stateMutability\": \"payable\"\n }\n }\n ],\n \"events\": [\n {\n \"id\": \"9f653f93-86f4-45bc-be75-d7f5888fbbc0\",\n \"interface\": \"c35d3449-4f24-4676-8e64-91c9e46f06c4\",\n \"namespace\": \"ns1\",\n \"pathname\": \"Changed\",\n \"signature\": \"Changed(address,uint256)\",\n \"name\": \"Changed\",\n \"description\": \"Emitted when the value changes\",\n \"params\": [\n {\n \"name\": \"_from\",\n \"schema\": {\n \"type\": \"string\",\n \"details\": {\n \"type\": \"address\",\n \"indexed\": true\n }\n }\n },\n {\n \"name\": \"_value\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\"\n }\n }\n }\n ]\n }\n ],\n \"published\": false\n}\n"},{"location":"reference/types/ffi/#field-descriptions","title":"Field Descriptions","text":"Field Name 
Description Type id The UUID of the FireFly interface (FFI) smart contract definition UUID message The UUID of the broadcast message that was used to publish this FFI to the network UUID namespace The namespace of the FFI string name The name of the FFI - usually matching the smart contract name string networkName The published name of the FFI within the multiparty network string description A description of the smart contract this FFI represents string version A version for the FFI - use of semantic versioning such as 'v1.0.1' is encouraged string methods An array of smart contract method definitions FFIMethod[] events An array of smart contract event definitions FFIEvent[] errors An array of smart contract error definitions FFIError[] published Indicates if the FFI is published to other members of the multiparty network bool"},{"location":"reference/types/ffi/#ffimethod","title":"FFIMethod","text":"Field Name Description Type id The UUID of the FFI method definition UUID interface The UUID of the FFI smart contract definition that this method is part of UUID name The name of the method string namespace The namespace of the FFI string pathname The unique name allocated to this method within the FFI for use on URL paths. Supports contracts that have multiple method overrides with the same name string description A description of the smart contract method string params An array of method parameter/argument definitions FFIParam[] returns An array of method return definitions FFIParam[] details Additional blockchain specific fields about this method from the original smart contract. Used by the blockchain plugin and for documentation generation. JSONObject"},{"location":"reference/types/ffi/#ffiparam","title":"FFIParam","text":"Field Name Description Type name The name of the parameter. 
Note that parameters must be ordered correctly on the FFI, according to the order in the blockchain smart contract string schema FireFly uses an extended subset of JSON Schema to describe parameters, similar to OpenAPI/Swagger. Converters are available for native blockchain interface definitions / type systems - such as an Ethereum ABI. See the documentation for more detail JSONAny"},{"location":"reference/types/ffi/#ffievent","title":"FFIEvent","text":"Field Name Description Type id The UUID of the FFI event definition UUID interface The UUID of the FFI smart contract definition that this event is part of UUID namespace The namespace of the FFI string pathname The unique name allocated to this event within the FFI for use on URL paths. Supports contracts that have multiple event overrides with the same name string signature The stringified signature of the event, as computed by the blockchain plugin string name The name of the event string description A description of the smart contract event string params An array of event parameter/argument definitions FFIParam[] details Additional blockchain specific fields about this event from the original smart contract. Used by the blockchain plugin and for documentation generation. JSONObject"},{"location":"reference/types/ffi/#ffiparam_1","title":"FFIParam","text":"Field Name Description Type name The name of the parameter. Note that parameters must be ordered correctly on the FFI, according to the order in the blockchain smart contract string schema FireFly uses an extended subset of JSON Schema to describe parameters, similar to OpenAPI/Swagger. Converters are available for native blockchain interface definitions / type systems - such as an Ethereum ABI. 
See the documentation for more detail JSONAny"},{"location":"reference/types/ffi/#ffierror","title":"FFIError","text":"Field Name Description Type id The UUID of the FFI error definition UUID interface The UUID of the FFI smart contract definition that this error is part of UUID namespace The namespace of the FFI string pathname The unique name allocated to this error within the FFI for use on URL paths string signature The stringified signature of the error, as computed by the blockchain plugin string name The name of the error string description A description of the smart contract error string params An array of error parameter/argument definitions FFIParam[]"},{"location":"reference/types/ffi/#ffiparam_2","title":"FFIParam","text":"Field Name Description Type name The name of the parameter. Note that parameters must be ordered correctly on the FFI, according to the order in the blockchain smart contract string schema FireFly uses an extended subset of JSON Schema to describe parameters, similar to OpenAPI/Swagger. Converters are available for native blockchain interface definitions / type systems - such as an Ethereum ABI. See the documentation for more detail JSONAny"},{"location":"reference/types/group/","title":"Group","text":"A privacy group is a list of identities that should receive a private communication.
When you send a private message, you can specify the list of participants in-line and it will be resolved to a group. Or you can reference the group using its identifying hash.
The sender of a message must be included in the group along with the other participants. The sender receives an event confirming the message, just as any other participant would do.
If the sender is omitted when the members are specified in-line, it is automatically included in the group.
"},{"location":"reference/types/group/#group-identity-hash","title":"Group identity hash","text":"The identifying hash for a group is determined as follows:
The namespace, name, and members array are serialized into a JSON object, without whitespace, and that serialization is hashed. The mechanism that keeps data private and ordered, without leaking data to the blockchain, is summarized in the below diagram.
The key points are:
- The name of the group can be used as an additional salt in generation of the group hash
- The ordering context (topic+group) is combined with a nonce that is incremented for each individual sender, to form a message-specific hash.

See NextPin for more information on the structure used for storing the next expected masked context pin, for each member of the privacy group.
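A minimal sketch of the group hash derivation described above; SHA-256 and the exact serialization shape are assumptions in this illustration, with FireFly's own implementation being normative:

```python
import hashlib
import json

def group_hash(namespace: str, name: str, members: list) -> str:
    # Serialize namespace, name and the resolved members array into a
    # JSON object with no whitespace, then hash it (SHA-256 assumed)
    serialized = json.dumps(
        {"namespace": namespace, "name": name, "members": members},
        separators=(",", ":"),
    ).encode()
    return hashlib.sha256(serialized).hexdigest()
```

Because the optional name participates in the serialization, two groups with identical members but different names produce different identifying hashes.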
"},{"location":"reference/types/group/#example","title":"Example","text":"{\n \"namespace\": \"ns1\",\n \"name\": \"\",\n \"members\": [\n {\n \"identity\": \"did:firefly:org/1111\",\n \"node\": \"4f563179-b4bd-4161-86e0-c2c1c0869c4f\"\n },\n {\n \"identity\": \"did:firefly:org/2222\",\n \"node\": \"61a99af8-c1f7-48ea-8fcc-489e4822a0ed\"\n }\n ],\n \"localNamespace\": \"ns1\",\n \"message\": \"0b9dfb76-103d-443d-92fd-b114fe07c54d\",\n \"hash\": \"c52ad6c034cf5c7382d9a294f49297096a52eb55cc2da696c564b2a276633b95\",\n \"created\": \"2022-05-16T01:23:16Z\"\n}\n"},{"location":"reference/types/group/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type namespace The namespace of the group within the multiparty network string name The optional name of the group, allowing multiple unique groups to exist with the same list of recipients string members The list of members in this privacy group Member[] localNamespace The local namespace of the group string message The message used to broadcast this group privately to the members UUID hash The identifier hash of this group. Derived from the name and group members Bytes32 created The time when the group was first used to send a message in the network FFTime"},{"location":"reference/types/group/#member","title":"Member","text":"Field Name Description Type identity The DID of the group member string node The UUID of the node that receives a copy of the off-chain message for the identity UUID"},{"location":"reference/types/identity/","title":"Identity","text":"FireFly contains an address book of identities, which is managed in a decentralized way across a multi-party system through claim and verification system.
See FIR-12 for evolution that is happening to Hyperledger FireFly to allow:
Root identities are registered with only a claim: a signed transaction from a particular blockchain account that binds a DID, with a name unique within the network, to that signing key.
The signing key then becomes a Verifier for that identity. Using that key the root identity can be used to register a new FireFly node in the network, send and receive messages, and register child identities.
When child identities are registered, a claim using a key that is going to be the Verifier for that child identity is required. However, this is insufficient to establish that identity as a child identity of the parent. There must be an additional verification that references the claim (by UUID) using the key verifier of the parent identity.
FireFly has adopted the DID standard for representing identities. A \"DID Method\" name of firefly is used to represent that the built-in identity system of Hyperledger FireFly is being used to resolve these DIDs.
So an example FireFly DID for organization abcd1234 is:
did:firefly:org/abcd1234

The adoption of DIDs in Hyperledger FireFly v1.0 is also a stepping stone to allowing pluggable DID-based identity resolvers into FireFly in the future.
You can also download a DID Document for a FireFly identity, which represents the verifiers and other information about that identity according to the JSON format in the DID standard.
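As an illustrative sketch, a downloaded DID Document for the organization above could look something like the following; the field names come from the W3C DID core vocabulary, and all values here are made up:

```json
{
  "@context": ["https://www.w3.org/ns/did/v1"],
  "id": "did:firefly:org/abcd1234",
  "verificationMethod": [
    {
      "id": "did:firefly:org/abcd1234#keys-1",
      "type": "EcdsaSecp256k1VerificationKey2019",
      "controller": "did:firefly:org/abcd1234",
      "blockchainAccountId": "0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1"
    }
  ],
  "authentication": ["did:firefly:org/abcd1234#keys-1"]
}
```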
"},{"location":"reference/types/identity/#example","title":"Example","text":"{\n \"id\": \"114f5857-9983-46fb-b1fc-8c8f0a20846c\",\n \"did\": \"did:firefly:org/org_1\",\n \"type\": \"org\",\n \"parent\": \"688072c3-4fa0-436c-a86b-5d89673b8938\",\n \"namespace\": \"ff_system\",\n \"name\": \"org_1\",\n \"messages\": {\n \"claim\": \"911b364b-5863-4e49-a3f8-766dbbae7c4c\",\n \"verification\": \"24636f11-c1f9-4bbb-9874-04dd24c7502f\",\n \"update\": null\n },\n \"created\": \"2022-05-16T01:23:15Z\"\n}\n"},{"location":"reference/types/identity/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID of the identity UUID did The DID of the identity. Unique across namespaces within a FireFly network string type The type of the identity FFEnum:\"org\"\"node\"\"custom\" parent The UUID of the parent identity. Unset for root organization identities UUID namespace The namespace of the identity. Organization and node identities are always defined in the ff_system namespace string name The name of the identity. The name must be unique within the type and namespace string description A description of the identity. Part of the updatable profile information of an identity string profile A set of metadata for the identity. Part of the updatable profile information of an identity JSONObject messages References to the broadcast messages that established this identity and proved ownership of the associated verifiers (keys) IdentityMessages created The creation time of the identity FFTime updated The last update time of the identity profile FFTime"},{"location":"reference/types/identity/#identitymessages","title":"IdentityMessages","text":"Field Name Description Type claim The UUID of claim message UUID verification The UUID of claim message. Unset for root organization identities UUID update The UUID of the most recently applied update message. 
Unset if no updates have been confirmed UUID"},{"location":"reference/types/message/","title":"Message","text":"Message is the envelope by which coordinated data exchange can happen between parties in the network. Data is passed by reference in these messages, and a chain of hashes covering the data and the details of the message, provides a verification against tampering.
A message is made up of three sections:
Sections (1) and (2) are fixed once the message is sent, and a hash is generated that provides tamper protection.
The hash is a function of the header, and all of the data payloads. Calculated as follows:
- The data array [{\"id\":\"{{DATA_UUID}}\",\"hash\":\"{{DATA_HASH}}\"}] is hashed, and that hash is stored in header.datahash
- The header is serialized as JSON with the deterministic order (listed below) and hashed

Each node independently calculates the hash, and the hash is included in the manifest of the Batch by the node that sends the message. Because the hash of that batch manifest is included in the blockchain transaction, a message transferred to a node that does not match the original message hash is rejected.
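The two hashing steps can be sketched as follows; SHA-256 and the header field ordering are assumptions in this illustration, since FireFly core defines the normative deterministic serialization:

```python
import hashlib
import json

def message_hash(header: dict, data_refs: list) -> str:
    # Step 1: hash the array of {id, hash} data references and store
    # the result in header.datahash
    data_json = json.dumps(data_refs, separators=(",", ":")).encode()
    header = dict(header, datahash=hashlib.sha256(data_json).hexdigest())
    # Step 2: serialize the header deterministically and hash it
    header_json = json.dumps(header, separators=(",", ":")).encode()
    return hashlib.sha256(header_json).hexdigest()
```

Changing any attached data item's hash changes header.datahash, and therefore the message hash, which is what makes the tamper check effective.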
"},{"location":"reference/types/message/#tag","title":"Tag","text":"The header.tag tells the processors of the message how it should be processed, and what data they should expect it to contain.
If you think of your decentralized application like a state machine, then you need to have a set of well defined transitions that can be performed between states. Each of these transitions that requires off-chain transfer of private data (optionally coordinated with an on-chain transaction) should be expressed as a type of message, with a particular tag.
Every copy of the application that runs in the participants of the network should look at this tag to determine what logic to execute against it.
Note: For consistency in ordering, the sender should also wait to process the state machine transitions associated with the message they send until it is ordered by the blockchain. They should not consider themselves special because they sent the message, and process it immediately - otherwise they could end up processing it in a different order to other parties in the network that are also processing the message.
"},{"location":"reference/types/message/#topics","title":"Topics","text":"The header.topics strings allow you to set the ordering context for each message you send, and you are strongly encouraged to set this explicitly on every message (falling back to the default topic is not recommended).
A key difference between blockchain backed decentralized applications and other event-driven applications, is that there is a single source of truth for the order in which things happen.
In a multi-party system with off-chain transfer of data as well as on-chain transfer of data, the two sets of data need to be coordinated together. The off-chain transfer might happen at different times, and is subject to the reliability of the parties & network links involved in that off-chain communication.
A \"stop the world\" approach to handling a single piece of missing data is not practical for a high volume production business network.
The ordering context is a function of:
The topic of the message. When an on-chain transaction is detected by FireFly, it can determine the above ordering - noting that privacy is preserved for private messages by masking this ordering context message-by-message with a nonce and the group ID, so that only the participants in that group can decode the ordering context.
If a piece of off-chain data is unavailable, then the FireFly node will block only streams of data that are associated with that ordering context.
For your application, you should choose the most granular identifier you can for your topic to minimize the scope of any blockage if one item of off-chain data fails to be delivered or is delayed. Some good examples are:
There are some advanced scenarios where you need to merge streams of ordered data, so that two previously separately ordered streams of communication (different state machines) are joined together to process a critical decision/transition in a deterministic order.
A synchronization point between two otherwise independent streams of communication.
To do this, simply specify two topics in the message you send, and the message will be independently ordered against both of those topics.
You will also receive two events for the confirmation of that message, one for each topic.
Some examples:
Merging two business transactions, 000001 and 000002, by discarding business transaction 000001 as a duplicate: specify topics: [\"000001\",\"000002\"] on the special merge message, and then from that point onwards you would only need to specify topics: [\"000002\"]. Merging two entities, id1 and id2, into a merged entity with id3: specify topics: [\"id1\",\"id2\",\"id3\"] on the special merge message, and then from that point onwards you would only need to specify topics: [\"id3\"]. By default messages are pinned to the blockchain, within a Batch.
For private messages, you can choose to disable this pinning by setting header.txtype: \"unpinned\".
Broadcast messages must be pinned to the blockchain.
"},{"location":"reference/types/message/#in-line-data","title":"In-line data","text":"When sending a message you can specify the array of Data attachments in-line, as part of the same JSON payload.
For example, a minimal broadcast message could be:
{\n \"data\": [\n {\"value\": \"hello world\"}\n ]\n}\n When you send this message with /api/v1/namespaces/{ns}/messages/broadcast:
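For illustration, the request can be built with any HTTP client; the base URL, port and namespace below are assumptions for a local development node:

```python
import json

FIREFLY_BASE = "http://localhost:5000"  # hypothetical local FireFly node
namespace = "default"

# Minimal broadcast body - FireFly fills in the header defaults on submission
body = {"data": [{"value": "hello world"}]}

url = f"{FIREFLY_BASE}/api/v1/namespaces/{namespace}/messages/broadcast"
payload = json.dumps(body)
print(url)
# e.g. with the requests library: requests.post(url, json=body).json()
```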
The header will be initialized with the default values, including txtype: \"batch_pin\". The data[0] entry will be stored as a Data resource. An example of a message as stored after confirmation: {\n \"header\": {\n \"id\": \"4ea27cce-a103-4187-b318-f7b20fd87bf3\",\n \"cid\": \"00d20cba-76ed-431d-b9ff-f04b4cbee55c\",\n \"type\": \"private\",\n \"txtype\": \"batch_pin\",\n \"author\": \"did:firefly:org/acme\",\n \"key\": \"0xD53B0294B6a596D404809b1d51D1b4B3d1aD4945\",\n \"created\": \"2022-05-16T01:23:10Z\",\n \"namespace\": \"ns1\",\n \"group\": \"781caa6738a604344ae86ee336ada1b48a404a85e7041cf75b864e50e3b14a22\",\n \"topics\": [\n \"topic1\"\n ],\n \"tag\": \"blue_message\",\n \"datahash\": \"c07be180b147049baced0b6219d9ce7a84ab48f2ca7ca7ae949abb3fe6491b54\"\n },\n \"localNamespace\": \"ns1\",\n \"state\": \"confirmed\",\n \"confirmed\": \"2022-05-16T01:23:16Z\",\n \"data\": [\n {\n \"id\": \"fdf9f118-eb81-4086-a63d-b06715b3bb4e\",\n \"hash\": \"34cf848d896c83cdf433ea7bd9490c71800b316a96aac3c3a78a42a4c455d67d\"\n }\n ]\n}\n"},{"location":"reference/types/message/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type header The message header contains all fields that are used to build the message hash MessageHeader localNamespace The local namespace of the message string hash The hash of the message. Derived from the header, which includes the data hash Bytes32 batch The UUID of the batch in which the message was pinned/transferred UUID txid The ID of the transaction used to order/deliver this message UUID state The current state of the message FFEnum:\"staged\"\"ready\"\"sent\"\"pending\"\"confirmed\"\"rejected\"\"cancelled\" confirmed The timestamp of when the message was confirmed/rejected FFTime rejectReason If a message was rejected, provides details on the rejection reason string data The list of data elements attached to the message DataRef[] pins For private messages, a unique pin hash:nonce is assigned for each topic string[] idempotencyKey An optional unique identifier for a message. 
Cannot be duplicated within a namespace, thus allowing idempotent submission of messages to the API. Local only - not transferred when the message is sent to other members of the network IdempotencyKey"},{"location":"reference/types/message/#messageheader","title":"MessageHeader","text":"Field Name Description Type id The UUID of the message. Unique to each message UUID cid The correlation ID of the message. Set this when a message is a response to another message UUID type The type of the message FFEnum:\"definition\"\"broadcast\"\"private\"\"groupinit\"\"transfer_broadcast\"\"transfer_private\"\"approval_broadcast\"\"approval_private\" txtype The type of transaction used to order/deliver this message FFEnum:\"none\"\"unpinned\"\"batch_pin\"\"network_action\"\"token_pool\"\"token_transfer\"\"contract_deploy\"\"contract_invoke\"\"contract_invoke_pin\"\"token_approval\"\"data_publish\" author The DID of identity of the submitter string key The on-chain signing key used to sign the transaction string created The creation time of the message FFTime namespace The namespace of the message within the multiparty network string topics A message topic associates this message with an ordered stream of data. A custom topic should be assigned - using the default topic is discouraged string[] tag The message tag indicates the purpose of the message to the applications that process it string datahash A single hash representing all data in the message. 
Derived from the array of data ids+hashes attached to this message Bytes32 txparent The parent transaction that originally triggered this message TransactionRef"},{"location":"reference/types/message/#transactionref","title":"TransactionRef","text":"Field Name Description Type type The type of the FireFly transaction FFEnum: id The UUID of the FireFly transaction UUID"},{"location":"reference/types/message/#dataref","title":"DataRef","text":"Field Name Description Type id The UUID of the referenced data resource UUID hash The hash of the referenced data Bytes32"},{"location":"reference/types/namespace/","title":"Namespace","text":"A namespace is a logical isolation domain for different applications, or tenants, that share the FireFly node.
A significant evolution of the Hyperledger FireFly namespace construct is proposed under FIR-12.
"},{"location":"reference/types/namespace/#example","title":"Example","text":"{\n \"name\": \"default\",\n \"networkName\": \"default\",\n \"description\": \"Default predefined namespace\",\n \"created\": \"2022-05-16T01:23:16Z\"\n}\n"},{"location":"reference/types/namespace/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type name The local namespace name string networkName The shared namespace name within the multiparty network string description A description of the namespace string created The time the namespace was created FFTime"},{"location":"reference/types/nextpin/","title":"NextPin","text":"Next-pins are maintained by each member of a privacy group, in order to detect if an on-chain transaction with a given \"pin\" for a message represents the next message for any member of the privacy group.
This allows every member to maintain a global order of transactions within a topic in a privacy group, without leaking the same hash between the messages that are communicated in that group.
See Group for more information on privacy groups.
"},{"location":"reference/types/nextpin/#example","title":"Example","text":"{\n \"namespace\": \"ns1\",\n \"context\": \"a25b65cfe49e5ed78c256e85cf07c96da938144f12fcb02fe4b5243a4631bd5e\",\n \"identity\": \"did:firefly:org/example\",\n \"hash\": \"00e55c63905a59782d5bc466093ead980afc4a2825eb68445bcf1312cc3d6de2\",\n \"nonce\": 12345\n}\n"},{"location":"reference/types/nextpin/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type namespace The namespace of the next-pin string context The context the next-pin applies to - the hash of the privacy group-hash + topic. The group-hash is only known to the participants (can itself contain a salt in the group-name). This context is combined with the member and nonce to determine the final hash that is written on-chain Bytes32 identity The member of the privacy group the next-pin applies to string hash The unique masked pin string Bytes32 nonce The numeric index - which is monotonically increasing for each member of the privacy group int64"},{"location":"reference/types/operation/","title":"Operation","text":"Operations are stateful external actions that FireFly triggers via plugins. They can succeed or fail. They are grouped into Transactions in order to accomplish a single logical task.
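The masking scheme can be sketched as follows. This is illustrative only - the exact byte layout and hashing rules of FireFly's pin calculation are defined by its implementation - but it shows why the same logical context yields a different on-chain hash for every member and nonce:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

# context = hash(group-hash + topic); the group-hash is only known to participants
group_hash = h(b"salted-group-name")   # hypothetical group hash
context = h(group_hash, b"topic1")

def next_pin(member: str, nonce: int) -> str:
    # on-chain pin = hash(context + member + nonce) - illustrative layout
    return h(context, member.encode(), nonce.to_bytes(8, "big")).hex()

pin_a = next_pin("did:firefly:org/example", 12345)
pin_next = next_pin("did:firefly:org/example", 12346)
print(pin_a, pin_next)
```

Because the nonce increments per member, consecutive messages from the same member in the same context produce unlinkable pins on-chain, while group members who know the context can still predict and match the next expected pin.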
The diagram below shows the different types of operation that are performed by each FireFly plugin type. The color coding (and numbers) map those different types of operation to the Transaction types that include those operations.
"},{"location":"reference/types/operation/#operation-status","title":"Operation status","text":"When initially created, an operation is in the Initialized state. Once the operation has been successfully sent to its respective plugin to be processed, its status moves to the Pending state. This indicates that the plugin is processing the operation. The operation will then move to the Succeeded or Failed state depending on the outcome.
In the event that an operation could not be submitted to the plugin for processing, for example because the plugin's microservice was temporarily unavailable, the operation will remain in Initialized state. Re-submitting the same FireFly API call using the same idempotency key will cause FireFly to re-submit the operation to its plugin.
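The effect of the idempotency key can be sketched with a toy in-memory model of the server-side check (not FireFly's actual implementation - the id scheme here is hypothetical):

```python
# (namespace, idempotencyKey) -> message id, simulating the server-side record
submitted = {}

def submit(namespace: str, idempotency_key: str, body: dict) -> str:
    """Re-submitting the same call with the same key resumes the original
    submission instead of creating a duplicate."""
    key = (namespace, idempotency_key)
    if key in submitted:
        return submitted[key]           # duplicate: return the original message id
    msg_id = f"msg-{len(submitted) + 1}"  # hypothetical id assignment
    submitted[key] = msg_id
    return msg_id

first = submit("ns1", "order-0001", {"data": [{"value": "hi"}]})
retry = submit("ns1", "order-0001", {"data": [{"value": "hi"}]})
print(first, retry)
```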
{\n \"id\": \"04a8b0c4-03c2-4935-85a1-87d17cddc20a\",\n \"namespace\": \"ns1\",\n \"tx\": \"99543134-769b-42a8-8be4-a5f8873f969d\",\n \"type\": \"sharedstorage_upload_batch\",\n \"status\": \"Succeeded\",\n \"plugin\": \"ipfs\",\n \"input\": {\n \"id\": \"80d89712-57f3-48fe-b085-a8cba6e0667d\"\n },\n \"output\": {\n \"payloadRef\": \"QmWj3tr2aTHqnRYovhS2mQAjYneRtMWJSU4M4RdAJpJwEC\"\n },\n \"created\": \"2022-05-16T01:23:15Z\"\n}\n"},{"location":"reference/types/operation/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID of the operation UUID namespace The namespace of the operation string tx The UUID of the FireFly transaction the operation is part of UUID type The type of the operation FFEnum:\"blockchain_pin_batch\"\"blockchain_network_action\"\"blockchain_deploy\"\"blockchain_invoke\"\"sharedstorage_upload_batch\"\"sharedstorage_upload_blob\"\"sharedstorage_upload_value\"\"sharedstorage_download_batch\"\"sharedstorage_download_blob\"\"dataexchange_send_batch\"\"dataexchange_send_blob\"\"token_create_pool\"\"token_activate_pool\"\"token_transfer\"\"token_approval\" status The current status of the operation OpStatus plugin The plugin responsible for performing the operation string input The input to this operation JSONObject output Any output reported back from the plugin for this operation JSONObject error Any error reported back from the plugin for this operation string created The time the operation was created FFTime updated The last update time of the operation FFTime retry If this operation was initiated as a retry to a previous operation, this field points to the UUID of the operation being retried UUID"},{"location":"reference/types/operationwithdetail/","title":"OperationWithDetail","text":"Operation with detail is an extension to operations that allow additional information to be encapsulated with an operation. 
An operation can be supplemented with extra information by its connector, and that information is returned in the detail field.
{\n \"id\": \"04a8b0c4-03c2-4935-85a1-87d17cddc20a\",\n \"namespace\": \"ns1\",\n \"tx\": \"99543134-769b-42a8-8be4-a5f8873f969d\",\n \"type\": \"sharedstorage_upload_batch\",\n \"status\": \"Succeeded\",\n \"plugin\": \"ipfs\",\n \"input\": {\n \"id\": \"80d89712-57f3-48fe-b085-a8cba6e0667d\"\n },\n \"output\": {\n \"payloadRef\": \"QmWj3tr2aTHqnRYovhS2mQAjYneRtMWJSU4M4RdAJpJwEC\"\n },\n \"created\": \"2022-05-16T01:23:15Z\",\n \"detail\": {\n \"created\": \"2023-01-27T17:04:24.26406392Z\",\n \"firstSubmit\": \"2023-01-27T17:04:24.419913295Z\",\n \"gas\": \"4161076\",\n \"gasPrice\": \"0\",\n \"history\": [\n {\n \"actions\": [\n {\n \"action\": \"AssignNonce\",\n \"count\": 1,\n \"lastOccurrence\": \"\",\n \"time\": \"\"\n },\n {\n \"action\": \"RetrieveGasPrice\",\n \"count\": 1,\n \"lastOccurrence\": \"2023-01-27T17:11:41.161213303Z\",\n \"time\": \"2023-01-27T17:11:41.161213303Z\"\n },\n {\n \"action\": \"Submit\",\n \"count\": 1,\n \"lastOccurrence\": \"2023-01-27T17:11:41.222374636Z\",\n \"time\": \"2023-01-27T17:11:41.222374636Z\"\n }\n ],\n \"subStatus\": \"Received\",\n \"time\": \"2023-01-27T17:11:41.122965803Z\"\n },\n {\n \"actions\": [\n {\n \"action\": \"ReceiveReceipt\",\n \"count\": 1,\n \"lastOccurrence\": \"2023-01-27T17:11:47.930332625Z\",\n \"time\": \"2023-01-27T17:11:47.930332625Z\"\n },\n {\n \"action\": \"Confirm\",\n \"count\": 1,\n \"lastOccurrence\": \"2023-01-27T17:12:02.660275549Z\",\n \"time\": \"2023-01-27T17:12:02.660275549Z\"\n }\n ],\n \"subStatus\": \"Tracking\",\n \"time\": \"2023-01-27T17:11:41.222400219Z\"\n },\n {\n \"actions\": [],\n \"subStatus\": \"Confirmed\",\n \"time\": \"2023-01-27T17:12:02.660309382Z\"\n }\n ],\n \"historySummary\": [\n {\n \"count\": 1,\n \"subStatus\": \"Received\"\n },\n {\n \"action\": \"AssignNonce\",\n \"count\": 1\n },\n {\n \"action\": \"RetrieveGasPrice\",\n \"count\": 1\n },\n {\n \"action\": \"Submit\",\n \"count\": 1\n },\n {\n \"count\": 1,\n \"subStatus\": \"Tracking\"\n },\n {\n 
\"action\": \"ReceiveReceipt\",\n \"count\": 1\n },\n {\n \"action\": \"Confirm\",\n \"count\": 1\n },\n {\n \"count\": 1,\n \"subStatus\": \"Confirmed\"\n }\n ],\n \"sequenceId\": \"0185f42f-fec8-93df-aeba-387417d477e0\",\n \"status\": \"Succeeded\",\n \"transactionHash\": \"0xfb39178fee8e725c03647b8286e6f5cb13f982abf685479a9ee59e8e9d9e51d8\"\n }\n}\n"},{"location":"reference/types/operationwithdetail/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID of the operation UUID namespace The namespace of the operation string tx The UUID of the FireFly transaction the operation is part of UUID type The type of the operation FFEnum:\"blockchain_pin_batch\"\"blockchain_network_action\"\"blockchain_deploy\"\"blockchain_invoke\"\"sharedstorage_upload_batch\"\"sharedstorage_upload_blob\"\"sharedstorage_upload_value\"\"sharedstorage_download_batch\"\"sharedstorage_download_blob\"\"dataexchange_send_batch\"\"dataexchange_send_blob\"\"token_create_pool\"\"token_activate_pool\"\"token_transfer\"\"token_approval\" status The current status of the operation OpStatus plugin The plugin responsible for performing the operation string input The input to this operation JSONObject output Any output reported back from the plugin for this operation JSONObject error Any error reported back from the plugin for this operation string created The time the operation was created FFTime updated The last update time of the operation FFTime retry If this operation was initiated as a retry to a previous operation, this field points to the UUID of the operation being retried UUID detail Additional detailed information about an operation provided by the connector ``"},{"location":"reference/types/simpletypes/","title":"Simple Types","text":""},{"location":"reference/types/simpletypes/#uuid","title":"UUID","text":"IDs are generated as UUID V4 globally unique identifiers
"},{"location":"reference/types/simpletypes/#fftime","title":"FFTime","text":"Times are serialized to JSON on the API in RFC 3339 / ISO 8601 nanosecond UTC time for example 2022-05-05T21:19:27.454767543Z.
Note that JavaScript can parse this format happily into millisecond time with Date.parse().
Times are persisted as nanosecond-resolution timestamps in the database.
On input, and in queries, times can be parsed from RFC3339, or unix timestamps (second, millisecond or nanosecond resolution).
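A sketch of parsing the RFC 3339 form in Python, which illustrates the truncation point: Python datetimes carry microseconds, so nanosecond digits are truncated (much as JavaScript's Date.parse() truncates to milliseconds):

```python
from datetime import datetime, timezone

def parse_fftime(s: str) -> datetime:
    """Parse an RFC 3339 UTC timestamp with up to nanosecond precision.

    Python datetimes only carry microseconds, so any nanosecond digits
    are truncated here.
    """
    assert s.endswith("Z"), "FFTime output is always UTC with a Z suffix"
    body = s[:-1]
    if "." in body:
        whole, frac = body.split(".")
        body = whole + "." + frac[:6].ljust(6, "0")  # truncate ns -> us
    return datetime.fromisoformat(body).replace(tzinfo=timezone.utc)

t = parse_fftime("2022-05-05T21:19:27.454767543Z")
print(t.isoformat())
```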
"},{"location":"reference/types/simpletypes/#ffbigint","title":"FFBigInt","text":"Large integers of up to 256 bits in size are common in blockchain, and are handled in FireFly.
In JSON output payloads in FireFly, including events, they are serialized as strings (in base 10).
On input you can provide a JSON string (strings with a 0x prefix are parsed as base 16), or a JSON number.
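The input rules can be sketched like this (illustrative; FireFly's actual parser lives in its Go codebase):

```python
def parse_ffbigint(value) -> int:
    """JSON numbers pass through; strings are base 10, unless
    prefixed with 0x, in which case they are parsed as base 16."""
    if isinstance(value, int):
        return value
    s = value.strip()
    if s.lower().startswith("0x"):
        return int(s, 16)
    return int(s, 10)

print(parse_ffbigint("0xff"), parse_ffbigint("255"), parse_ffbigint(255))
```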
v1.3.1","text":"In versions of FireFly up to and including v1.3.1, be careful when using large JSON numbers. The largest number that is safe to transfer using a JSON number is 2^53 - 1, and it is possible to receive errors from the transaction manager, or for precision to be silently lost, when passing numeric parameters larger than that. It is recommended to pass large numbers as strings to avoid loss of precision.
v1.3.2 and higher","text":"In FireFly v1.3.2, support was added for 256-bit precision JSON numbers. Some application frameworks automatically serialize large JSON numbers to strings, which FireFly already supports, but there is no upper limit to the size of a number that can be represented in JSON. FireFly now also accepts much larger JSON numbers directly, up to 256-bit precision. For example, the following input parameter to a contract constructor is now supported:
...\n \"definition\": [{\n \"inputs\": [\n {\n \"internalType\":\"uint256\",\n \"name\": \"x\",\n \"type\": \"uint256\"\n }\n ],\n \"outputs\":[],\n \"type\":\"constructor\"\n }],\n \"params\": [ 10000000000000000000000000 ]\n ...\n Some application frameworks serialize large numbers in scientific notation, e.g. 1e+25. FireFly v1.3.2 added support for handling numbers in scientific notation in parameters. This removes the need to change an application that uses this number format. For example, the following input parameter to a contract constructor is now supported:
...\n \"definition\": [{\n \"inputs\": [\n {\n \"internalType\":\"uint256\",\n \"name\": \"x\",\n \"type\": \"uint256\"\n }\n ],\n \"outputs\":[],\n \"type\":\"constructor\"\n }],\n \"params\": [ 1e+25 ]\n ...\n"},{"location":"reference/types/simpletypes/#jsonany","title":"JSONAny","text":"Any JSON type. An object, array, string, number, boolean or null.
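If your framework emits scientific notation, decimal arithmetic recovers the exact integer on the client side too - a useful sanity check when debugging, since a binary float cannot represent 10^25 exactly:

```python
from decimal import Decimal

# 1e+25 parsed as a float is only approximate; Decimal keeps the exact value
exact = int(Decimal("1e+25"))
approx = int(float("1e+25"))
print(exact, approx, exact == approx)
```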
FireFly stores object data with the same field order as was provided on the input, but with any whitespace removed.
"},{"location":"reference/types/simpletypes/#jsonobject","title":"JSONObject","text":"Any JSON Object. Must be an object, rather than an array or a simple type.
"},{"location":"reference/types/subscription/","title":"Subscription","text":"Each Subscription tracks delivery of events to a particular application, and allows FireFly to ensure that messages are delivered reliably to that application.
"},{"location":"reference/types/subscription/#creating-a-subscription","title":"Creating a subscription","text":"Before you can connect to a subscription, you must create it via the REST API.
One special case where you do not need to do this, is Ephemeral WebSocket connections (described below). For these you can just connect and immediately start receiving events.
When creating a new subscription, you give it a name which is how you will refer to it when you connect.
You are also able to specify server-side filtering that should be performed against the event stream, to limit the set of events that are sent to your application.
All subscriptions are created within a namespace, and automatically filter events to only those emitted within that namespace.
You can create multiple subscriptions for your application, to request different sets of server-side filtering for events. You can then request FireFly to deliver events for both subscriptions over the same WebSocket (if you are using the WebSocket transport). However, delivery order is not assured between two subscriptions.
"},{"location":"reference/types/subscription/#subscriptions-and-workload-balancing","title":"Subscriptions and workload balancing","text":"You can have multiple scaled runtime instances of a single application, all running in parallel. These instances of the application all share a single subscription.
Each event is only delivered once to the subscription, regardless of how many instances of your application connect to FireFly.
With multiple WebSocket connections active on a single subscription, each event might be delivered to a different instance of your application. This means workload is balanced across your instances. However, each event still needs to be acknowledged, so delivery processing order can still be maintained within your application database state.
If you have multiple different applications all needing their own copy of the same event, then you need to configure a separate subscription for each application.
"},{"location":"reference/types/subscription/#pluggable-transports","title":"Pluggable Transports","text":"Hyperledger FireFly has two built-in transports for delivery of events to applications - WebSockets and Webhooks.
The event interface is fully pluggable, so you can extend connectivity over an external event bus - such as NATS, Apache Kafka, Rabbit MQ, Redis etc.
"},{"location":"reference/types/subscription/#websockets","title":"WebSockets","text":"If your application has a back-end server runtime, then WebSockets are the most popular option for listening to events. WebSockets are well supported by all popular application development frameworks, and are very firewall friendly for connecting applications into your FireFly server.
Check out the @hyperledger/firefly-sdk SDK for Node.js applications, and the hyperledger/firefly-common module for Golang applications. These both contain reliable WebSocket clients for your event listeners.
A Java SDK is a roadmap item for the community.
"},{"location":"reference/types/subscription/#websocket-protocol","title":"WebSocket protocol","text":"FireFly has a simple protocol on top of WebSockets:
You can supply the namespace and name query parameters in the URL when you connect, along with query params for other fields of WSStart, as an alternative to sending a start JSON payload. Each event delivered over the WebSocket must then be acknowledged, unless you requested autoack in step (1). The SDK libraries for FireFly help you ensure you send the start payload each time your WebSocket reconnects.
start and ack explicitly","text":"Here's an example websocat command showing an explicit start and ack.
$ websocat ws://localhost:5000/ws\n{\"type\":\"start\",\"namespace\":\"default\",\"name\":\"docexample\"}\n# ... for each event that arrives here, you send an ack ...\n{\"type\":\"ack\",\"id\":\"70ed4411-57cf-4ba1-bedb-fe3b4b5fd6b6\"}\n When creating your subscription, you can set readahead in order to ask FireFly to stream a number of messages to your application, ahead of receiving the acknowledgements.
readahead can be a powerful tool to increase performance, but does require your application to ensure it processes events in the correct order and sends exactly one ack for each event.
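The start and ack payloads are plain JSON, so they are easy to construct and log from any client; a small sketch:

```python
import json

# The start payload attaches this connection to the named subscription
start = json.dumps({"type": "start", "namespace": "default", "name": "docexample"})

def ack(event_id: str) -> str:
    # Exactly one ack per delivered event, referencing the event's id
    return json.dumps({"type": "ack", "id": event_id})

print(start)
print(ack("70ed4411-57cf-4ba1-bedb-fe3b4b5fd6b6"))
```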
autoack","text":"Here's an example websocat command where we use URL query parameters to avoid the need to send a start JSON payload.
We also use autoack so that events just keep flowing from the server.
$ websocat \"ws://localhost:5000/ws?namespace=default&name=docexample&autoack\"\n# ... events just keep arriving here, as the server-side auto-acknowledges\n# the events as it delivers them to you.\n Note using autoack means you can miss events in the case of a disconnection, so should not be used for production applications that require at-least-once delivery.
FireFly WebSockets provide a special option to create a subscription dynamically, that only lasts for as long as you are connected to the server.
We call these ephemeral subscriptions.
Here's an example websocat command showing an ephemeral subscription - notice we don't specify a name for the subscription, and there is no need to have created the subscription beforehand.
Here we also include an extra query parameter to set a server-side filter, to only include message events.
$ websocat \"ws://localhost:5000/ws?namespace=default&ephemeral&autoack&filter.events=message_.*\"\n# ... matching events just keep arriving here, as the server-side\n# auto-acknowledges the events as it delivers them to you.\n Ephemeral subscriptions are very convenient for experimentation, debugging and monitoring. However, they do not give reliable delivery because you only receive events that occur while you are connected. If you disconnect and reconnect, you will miss all events that happened while your application was not listening.
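The filter.events value is a regular expression matched against event types, so you can check a pattern locally before using it server-side:

```python
import re

# Same pattern as the filter.events query parameter above
event_filter = re.compile(r"message_.*")

events = ["message_confirmed", "token_transfer_confirmed", "message_rejected"]
delivered = [e for e in events if event_filter.match(e)]
print(delivered)
```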
"},{"location":"reference/types/subscription/#webhooks","title":"Webhooks","text":"The Webhook transport allows FireFly to make HTTP calls against your application's API when events matching your subscription are emitted.
This means the direction of network connection is from the FireFly server, to the application (the reverse of WebSockets). It also means you don't need to add any connection management code to your application - just expose an API that FireFly can call to process the events.
Webhooks are great for serverless functions (AWS Lambda etc.), integrations with SaaS applications, and calling existing APIs.
The FireFly configuration options for a Webhook subscription are very flexible, allowing you to customize your HTTP requests as follows:
If you want FireFly to retry on a non-2xx HTTP status code or other error, you should enable and configure options.retry. Use fastack to acknowledge against FireFly immediately and make multiple parallel calls to the HTTP API in a fire-and-forget fashion. Including the data element of message events in the HTTP request requires withData to be set on the subscription, in addition to the input.* configuration options. Replies can be generated automatically for message_confirmed events: FireFly sets the cid and topic in the reply message to match the request, and sets the tag in the reply message, per the configuration, or dynamically based on a field in the input request data. Webhooks have the ability to batch events into a single HTTP request instead of sending an event per HTTP request. The interface will be a JSON array of events instead of a top level JSON object with a single event. The size of the batch is capped by the readAhead limit, and an optional timeout can be specified to send the events when the batch hasn't filled.
To enable this, set the following configuration under SubscriptionOptions:
batch | Events are delivered in batches in an ordered array. The batch size is capped to the readAhead limit. The event payload is always an array even if there is a single event in the batch. Commonly used with Webhooks to allow events to be delivered and acknowledged in batches. | bool |
batchTimeout | When batching is enabled, the optional timeout to send events even when the batch hasn't filled. Defaults to 2 seconds | string |
NOTE: When batch is enabled, withData cannot be used, because the input options may alter the HTTP request based on a single event, which does not make sense when events are delivered in batches.
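A webhook endpoint that tolerates both shapes - a single event object when batching is off, a JSON array when it is on - can normalize the payload up front. A sketch (the handler names here are hypothetical):

```python
import json

def process(event):
    """Hypothetical per-event application handler."""
    pass

def handle_webhook(body: str) -> int:
    """Accept either one event object or a batched array of events."""
    payload = json.loads(body)
    events = payload if isinstance(payload, list) else [payload]
    for event in events:
        process(event)
    return len(events)

print(handle_webhook('{"id": "e1"}'),
      handle_webhook('[{"id": "e1"}, {"id": "e2"}]'))
```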
{\n \"id\": \"c38d69fd-442e-4d6f-b5a4-bab1411c7fe8\",\n \"namespace\": \"ns1\",\n \"name\": \"app1\",\n \"transport\": \"websockets\",\n \"filter\": {\n \"events\": \"^(message_.*|token_.*)$\",\n \"message\": {\n \"tag\": \"^(red|blue)$\"\n },\n \"transaction\": {},\n \"blockchainevent\": {}\n },\n \"options\": {\n \"firstEvent\": \"newest\",\n \"readAhead\": 50\n },\n \"created\": \"2022-05-16T01:23:15Z\",\n \"updated\": null\n}\n"},{"location":"reference/types/subscription/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID of the subscription UUID namespace The namespace of the subscription. A subscription will only receive events generated in the namespace of the subscription string name The name of the subscription. The application specifies this name when it connects, in order to attach to the subscription and receive events that arrived while it was disconnected. If multiple apps connect to the same subscription, events are workload balanced across the connected application instances string transport The transport plugin responsible for event delivery (WebSockets, Webhooks, JMS, NATS etc.) string filter Server-side filter to apply to events SubscriptionFilter options Subscription options SubscriptionOptions ephemeral Ephemeral subscriptions only exist as long as the application is connected, and as such will miss events that occur while the application is disconnected, and cannot be created administratively. You can create one over a connected WebSocket connection bool created Creation time of the subscription FFTime updated Last time the subscription was updated FFTime"},{"location":"reference/types/subscription/#subscriptionfilter","title":"SubscriptionFilter","text":"Field Name Description Type events Regular expression to apply to the event type, to subscribe to a subset of event types string message Filters specific to message events. 
If an event is not a message event, these filters are ignored MessageFilter transaction Filters specific to events with a transaction. If an event is not associated with a transaction, this filter is ignored TransactionFilter blockchainevent Filters specific to blockchain events. If an event is not a blockchain event, these filters are ignored BlockchainEventFilter topic Regular expression to apply to the topic of the event, to subscribe to a subset of topics. Note for messages sent with multiple topics, a separate event is emitted for each topic string topics Deprecated: Please use 'topic' instead string tag Deprecated: Please use 'message.tag' instead string group Deprecated: Please use 'message.group' instead string author Deprecated: Please use 'message.author' instead string"},{"location":"reference/types/subscription/#messagefilter","title":"MessageFilter","text":"Field Name Description Type tag Regular expression to apply to the message 'header.tag' field string group Regular expression to apply to the message 'header.group' field string author Regular expression to apply to the message 'header.author' field string"},{"location":"reference/types/subscription/#transactionfilter","title":"TransactionFilter","text":"Field Name Description Type type Regular expression to apply to the transaction 'type' field string"},{"location":"reference/types/subscription/#blockchaineventfilter","title":"BlockchainEventFilter","text":"Field Name Description Type name Regular expression to apply to the blockchain event 'name' field, which is the name of the event in the underlying blockchain smart contract string listener Regular expression to apply to the blockchain event 'listener' field, which is the UUID of the event listener. So you can restrict your subscription to certain blockchain listeners. 
Alternatively, to avoid your application needing to know listener UUIDs, you can set the 'topic' field of blockchain event listeners, and use a topic filter on your subscriptions string"},{"location":"reference/types/subscription/#subscriptionoptions","title":"SubscriptionOptions","text":"Field Name Description Type firstEvent Whether your application would like to receive events from the 'oldest' event emitted by your FireFly node (from the beginning of time), or the 'newest' event (from now), or a specific event sequence. Default is 'newest' SubOptsFirstEvent readAhead The number of events to stream ahead to your application, while waiting for confirmation of consumption of those events. At least once delivery semantics are used in FireFly, so if your application crashes/reconnects this is the maximum number of events you would expect to be redelivered after it restarts uint withData Whether message events delivered over the subscription should be packaged with the full data of those messages in-line as part of the event JSON payload, or if the application should make separate REST calls to download that data. May not be supported on some transports. bool batch Events are delivered in batches in an ordered array. The batch size is capped to the readAhead limit. The event payload is always an array even if there is a single event in the batch, allowing client-side optimizations when processing the events in a group. Available for both Webhooks and WebSockets. bool batchTimeout When batching is enabled, the optional timeout to send events even when the batch hasn't filled. string fastack Webhooks only: When true the event will be acknowledged before the webhook is invoked, allowing parallel invocations bool url Webhooks only: HTTP url to invoke. Can be relative if a base URL is set in the webhook plugin config string method Webhooks only: HTTP method to invoke. 
Default=POST string json Webhooks only: Whether to assume the response body is JSON, regardless of the returned Content-Type bool reply Webhooks only: Whether to automatically send a reply event, using the body returned by the webhook bool replytag Webhooks only: The tag to set on the reply message string replytx Webhooks only: The transaction type to set on the reply message string headers Webhooks only: Static headers to set on the webhook request `` query Webhooks only: Static query params to set on the webhook request `` tlsConfigName The name of an existing TLS configuration associated to the namespace to use string input Webhooks only: A set of options to extract data from the first JSON input data in the incoming message. Only applies if withData=true WebhookInputOptions retry Webhooks only: a set of options for retrying the webhook call WebhookRetryOptions httpOptions Webhooks only: a set of options for HTTP WebhookHTTPOptions"},{"location":"reference/types/subscription/#webhookinputoptions","title":"WebhookInputOptions","text":"Field Name Description Type query A top-level property of the first data input, to use for query parameters string headers A top-level property of the first data input, to use for headers string body A top-level property of the first data input, to use for the request body. 
Default is the whole first body string path A top-level property of the first data input, to use for a path to append with escaping to the webhook path string replytx A top-level property of the first data input, to use to dynamically set whether to pin the response (so the requester can choose) string"},{"location":"reference/types/subscription/#webhookretryoptions","title":"WebhookRetryOptions","text":"Field Name Description Type enabled Enables retry on HTTP calls, defaults to false bool count Number of times to retry the webhook call in case of failure int initialDelay Initial delay between retries when we retry the webhook call string maxDelay Max delay between retries when we retry the webhook call string"},{"location":"reference/types/subscription/#webhookhttpoptions","title":"WebhookHTTPOptions","text":"Field Name Description Type proxyURL HTTP proxy URL to use for outbound requests to the webhook string tlsHandshakeTimeout The max duration to hold a TLS handshake alive string requestTimeout The max duration to allow for the whole webhook request string maxIdleConns The max number of idle connections to hold pooled int idleTimeout The max duration to hold an HTTP keepalive connection between calls string connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted. string expectContinueTimeout See ExpectContinueTimeout in the Go docs string"},{"location":"reference/types/tokenapproval/","title":"TokenApproval","text":"A token approval is a record that an address other than the owner of a token balance has been granted authority to transfer tokens on the owner's behalf.
The approved \"operator\" (or \"spender\") account might be a smart contract, or another individual.
FireFly provides APIs for initiating and tracking approvals, which token connectors map to the implementation of the underlying token.
The off-chain index FireFly maintains for allowances allows you to quickly find the most recent allowance event associated with a pair of keys, using the subject field combined with the active field. When a new Token Approval event is delivered to FireFly Core by the Token Connector, any previous approval for the same subject is marked "active": false, and the new approval is marked with "active": true.
The token connector is responsible for the format of the subject field to reflect the owner / operator (spender) relationship.
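The indexing rule described above - exactly one active approval per subject - can be sketched as follows. This is a minimal illustration of the rule, not FireFly's actual implementation, and the subject format shown is hypothetical (the token connector decides the real format):

```python
# Minimal sketch of the off-chain approval index rule: when a new
# approval event arrives, any prior record for the same subject is
# deactivated, and the new record becomes the single active one.

def apply_approval_event(index, event):
    """Record a new approval event; deactivate any prior approval
    with the same subject."""
    subject = event["subject"]
    for record in index:
        if record["subject"] == subject:
            record["active"] = False
    new_record = dict(event, active=True)
    index.append(new_record)
    return new_record

# Hypothetical "<owner>:<operator>" subject format, for illustration only
index = []
apply_approval_event(index, {"subject": "0xaaa:0xbbb", "approved": True})
apply_approval_event(index, {"subject": "0xaaa:0xbbb", "approved": False})

active = [r for r in index if r["active"]]  # only the latest event remains active
```

Querying for the active record then reflects the most recent approval or revocation for that owner/operator pair, while the full history stays in the index.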
{\n \"localId\": \"1cd3e2e2-dd6a-441d-94c5-02439de9897b\",\n \"pool\": \"1244ecbe-5862-41c3-99ec-4666a18b9dd5\",\n \"connector\": \"erc20_erc721\",\n \"key\": \"0x55860105d6a675dbe6e4d83f67b834377ba677ad\",\n \"operator\": \"0x30017fd084715e41aa6536ab777a8f3a2b11a5a1\",\n \"approved\": true,\n \"info\": {\n \"owner\": \"0x55860105d6a675dbe6e4d83f67b834377ba677ad\",\n \"spender\": \"0x30017fd084715e41aa6536ab777a8f3a2b11a5a1\",\n \"value\": \"115792089237316195423570985008687907853269984665640564039457584007913129639935\"\n },\n \"namespace\": \"ns1\",\n \"protocolId\": \"000000000032/000000/000000\",\n \"subject\": \"0x55860105d6a675dbe6e4d83f67b834377ba677ad:0x30017fd084715e41aa6536ab777a8f3a2b11a5a1\",\n \"active\": true,\n \"created\": \"2022-05-16T01:23:15Z\",\n \"tx\": {\n \"type\": \"token_approval\",\n \"id\": \"4b6e086d-0e31-482d-9683-cd18b2045031\"\n }\n}\n"},{"location":"reference/types/tokenapproval/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type localId The UUID of this token approval, in the local FireFly node UUID pool The UUID the token pool this approval applies to UUID connector The name of the token connector, as specified in the FireFly core configuration file. Required on input when there are more than one token connectors configured string key The blockchain signing key for the approval request. On input defaults to the first signing key of the organization that operates the node string operator The blockchain identity that is granted the approval string approved Whether this record grants permission for an operator to perform actions on the token balance (true), or revokes permission (false) bool info Token connector specific information about the approval operation, such as whether it applied to a limited balance of a fungible token. 
See your chosen token connector documentation for details JSONObject namespace The namespace for the approval, which must match the namespace of the token pool string protocolId An alphanumerically sortable string that represents this event uniquely with respect to the blockchain string subject A string identifying the parties and entities in the scope of this approval, as provided by the token connector string active Indicates if this approval is currently active (only one approval can be active per subject) bool message The UUID of a message that has been correlated with this approval using the data field of the approval in a compatible token connector UUID messageHash The hash of a message that has been correlated with this approval using the data field of the approval in a compatible token connector Bytes32 created The creation time of the token approval FFTime tx If submitted via FireFly, this will reference the UUID of the FireFly transaction (if the token connector in use supports attaching data) TransactionRef blockchainEvent The UUID of the blockchain event UUID config Input only field, with token connector specific configuration of the approval. See your chosen token connector documentation for details JSONObject"},{"location":"reference/types/tokenapproval/#transactionref","title":"TransactionRef","text":"Field Name Description Type type The type of the FireFly transaction FFEnum: id The UUID of the FireFly transaction UUID"},{"location":"reference/types/tokenpool/","title":"TokenPool","text":"Token pools are a FireFly construct for describing a set of tokens.
A token pool might represent the total supply of a particular fungible token, or a group of related non-fungible tokens.
The exact definition of a token pool is dependent on the token connector implementation.
Check the documentation for your chosen connector implementation to see the detailed options for configuring a new Token Pool.
Note that it is very common to use a Token Pool to teach Hyperledger FireFly about an existing token, so that you can start interacting with it.
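As an illustration of indexing an existing token, the following sketches a request body for creating a pool against an already-deployed contract. The config contents shown (address, blockNumber) are connector-specific assumptions based on the sample ERC-20/ERC-721 connector - check your chosen connector's documentation for the real options:

```python
# Sketch of a pool-creation request body for indexing an existing token.
# The "config" section is entirely connector-specific; the keys below are
# assumptions modelled on the sample ERC-20/ERC-721 connector.

def existing_token_pool(name, pool_type, address, block_number="0"):
    return {
        "name": name,
        "type": pool_type,                # "fungible" or "nonfungible"
        "config": {
            "address": address,           # existing on-chain contract address
            "blockNumber": block_number,  # where to start indexing from
        },
    }

body = existing_token_pool(
    "my_token", "fungible",
    "0x056df1c53c3c00b0e13d37543f46930b42f71db0",
)
```

FireFly then listens for the token's events from the configured starting block, building its off-chain index without requiring a new contract deployment.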
"},{"location":"reference/types/tokenpool/#example-token-pool-types","title":"Example token pool types","text":"Some examples of how the generic concept of a Token Pool maps to various well-defined Ethereum standards:
These are provided as examples only - a custom token connector could be backed by any token technology (Ethereum or otherwise) as long as it can support the basic operations described here (create pool, mint, burn, transfer). Other FireFly repos include a sample implementation of a token connector for ERC-20 and ERC-721 as well as ERC-1155.
"},{"location":"reference/types/tokenpool/#example","title":"Example","text":"{\n \"id\": \"90ebefdf-4230-48a5-9d07-c59751545859\",\n \"type\": \"fungible\",\n \"namespace\": \"ns1\",\n \"name\": \"my_token\",\n \"standard\": \"ERC-20\",\n \"locator\": \"address=0x056df1c53c3c00b0e13d37543f46930b42f71db0\\u0026schema=ERC20WithData\\u0026type=fungible\",\n \"decimals\": 18,\n \"connector\": \"erc20_erc721\",\n \"message\": \"43923040-b1e5-4164-aa20-47636c7177ee\",\n \"active\": true,\n \"created\": \"2022-05-16T01:23:15Z\",\n \"info\": {\n \"address\": \"0x056df1c53c3c00b0e13d37543f46930b42f71db0\",\n \"name\": \"pool8197\",\n \"schema\": \"ERC20WithData\"\n },\n \"tx\": {\n \"type\": \"token_pool\",\n \"id\": \"a23ffc87-81a2-4cbc-97d6-f53d320c36cd\"\n },\n \"published\": false\n}\n"},{"location":"reference/types/tokenpool/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID of the token pool UUID type The type of token the pool contains, such as fungible/non-fungible FFEnum:\"fungible\"\"nonfungible\" namespace The namespace for the token pool string name The name of the token pool. Note the name is not validated against the description of the token on the blockchain string networkName The published name of the token pool within the multiparty network string standard The ERC standard the token pool conforms to, as reported by the token connector string locator A unique identifier for the pool, as provided by the token connector string key The signing key used to create the token pool. On input for token connectors that support on-chain deployment of new tokens (vs. only index existing ones) this determines the signing key used to create the token on-chain string symbol The token symbol. 
If supplied on input for an existing on-chain token, this must match the on-chain information string decimals Number of decimal places that this token has int connector The name of the token connector, as specified in the FireFly core configuration file that is responsible for the token pool. Required on input when multiple token connectors are configured string message The UUID of the broadcast message used to inform the network about this pool UUID active Indicates whether the pool has been successfully activated with the token connector bool created The creation time of the pool FFTime config Input only field, with token connector specific configuration of the pool, such as an existing Ethereum address and block number to used to index the pool. See your chosen token connector documentation for details JSONObject info Token connector specific information about the pool. See your chosen token connector documentation for details JSONObject tx Reference to the FireFly transaction used to create and broadcast this pool to the network TransactionRef interface A reference to an existing FFI, containing pre-registered type information for the token contract FFIReference interfaceFormat The interface encoding format supported by the connector for this token pool FFEnum:\"abi\"\"ffi\" methods The method definitions resolved by the token connector to be used by each token operation JSONAny published Indicates if the token pool is published to other members of the multiparty network bool"},{"location":"reference/types/tokenpool/#transactionref","title":"TransactionRef","text":"Field Name Description Type type The type of the FireFly transaction FFEnum: id The UUID of the FireFly transaction UUID"},{"location":"reference/types/tokenpool/#ffireference","title":"FFIReference","text":"Field Name Description Type id The UUID of the FireFly interface UUID name The name of the FireFly interface string version The version of the FireFly interface 
string"},{"location":"reference/types/tokentransfer/","title":"TokenTransfer","text":"A Token Transfer is created for each transfer of value that happens under a token pool.
The transfers form an off-chain audit history (an \"index\") of the transactions that have been performed on the blockchain.
This historical information cannot be queried directly from the blockchain for most token implementations, because it is inefficient to use the blockchain to store complex data structures like this. So the blockchain simply emits events when state changes, and if you want to be able to query this historical information you need to track it in your own off-chain database.
Hyperledger FireFly maintains this index automatically for all Token Pools that are configured.
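The off-chain indexing idea described above can be sketched by replaying transfer events into a balance map. This is a minimal illustration of the principle, not FireFly's actual index (which also records the full transfer history, protocolId ordering, and so on):

```python
# Minimal sketch: replay mint/burn/transfer events emitted by the chain
# into an off-chain, queryable balance index.
from collections import defaultdict

def apply_transfer(balances, transfer):
    amount = int(transfer["amount"])
    if transfer["type"] != "mint":   # burn/transfer: debit the source account
        balances[transfer["from"]] -= amount
    if transfer["type"] != "burn":   # mint/transfer: credit the target account
        balances[transfer["to"]] += amount

balances = defaultdict(int)
apply_transfer(balances, {"type": "mint", "to": "0xaaa", "amount": "100"})
apply_transfer(balances, {"type": "transfer", "from": "0xaaa", "to": "0xbbb", "amount": "40"})
apply_transfer(balances, {"type": "burn", "from": "0xaaa", "amount": "10"})
```

After replaying the three events, the index answers balance queries (0xaaa holds 50, 0xbbb holds 40) that the chain itself cannot serve efficiently.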
"},{"location":"reference/types/tokentransfer/#firefly-initiated-vs-non-firefly-initiated-transfers","title":"FireFly initiated vs. non-FireFly initiated transfers","text":"There is no requirement at all to use FireFly to initiate transfers in Token Pools that Hyperledger FireFly is aware of. FireFly will listen to and update its audit history and balances for all transfers, regardless of whether they were initiated using a FireFly Supernode or not.
So you could for example use Metamask to initiate a transfer against an ERC-20/ERC-721 contract directly on your blockchain, and you will see it appear as a transfer. Or initiate a transfer on-chain via another Smart Contract, such as a Hashed Timelock Contract (HTLC) releasing funds held in digital escrow.
"},{"location":"reference/types/tokentransfer/#message-coordinated-transfers","title":"Message coordinated transfers","text":"One special feature enabled when using FireFly to initiate transfers, is to coordinate an off-chain data transfer (private or broadcast) with the on-chain transfer of value. This is a powerful tool to allow transfers to have rich metadata associated that is too sensitive (or too large) to include on the blockchain itself.
These transfers have a message associated with them, and require a compatible Token Connector and on-chain Smart Contract that allows a data payload to be included as part of the transfer, and to be emitted as part of the transfer event.
Examples of how to do this are included in the ERC-20, ERC-721 and ERC-1155 Token Connector sample smart contracts.
"},{"location":"reference/types/tokentransfer/#transfer-types","title":"Transfer types","text":"There are three primary types of transfer:
For mints, the from address will be unset. For burns, the to address will be unset. For transfers, the from and to addresses are both set. Note that the key that signed the Transfer transaction might be different to the from account that is the owner of the tokens before the transfer.
The Approval resource is used to track which signing accounts (other than the owner) have approval to transfer tokens on the owner's behalf.
"},{"location":"reference/types/tokentransfer/#example","title":"Example","text":"{\n \"type\": \"transfer\",\n \"pool\": \"1244ecbe-5862-41c3-99ec-4666a18b9dd5\",\n \"uri\": \"firefly://token/1\",\n \"connector\": \"erc20_erc721\",\n \"namespace\": \"ns1\",\n \"key\": \"0x55860105D6A675dBE6e4d83F67b834377Ba677AD\",\n \"from\": \"0x55860105D6A675dBE6e4d83F67b834377Ba677AD\",\n \"to\": \"0x55860105D6A675dBE6e4d83F67b834377Ba677AD\",\n \"amount\": \"1000000000000000000\",\n \"protocolId\": \"000000000041/000000/000000\",\n \"message\": \"780b9b90-e3b0-4510-afac-b4b1f2940b36\",\n \"messageHash\": \"780204e634364c42779920eddc8d9fecccb33e3607eeac9f53abd1b31184ae4e\",\n \"created\": \"2022-05-16T01:23:15Z\",\n \"tx\": {\n \"type\": \"token_transfer\",\n \"id\": \"62767ca8-99f9-439c-9deb-d80c6672c158\"\n },\n \"blockchainEvent\": \"b57fcaa2-156e-4c3f-9b0b-ddec9ee25933\"\n}\n"},{"location":"reference/types/tokentransfer/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type type The type of transfer such as mint/burn/transfer FFEnum:\"mint\"\"burn\"\"transfer\" localId The UUID of this token transfer, in the local FireFly node UUID pool The UUID the token pool this transfer applies to UUID tokenIndex The index of the token within the pool that this transfer applies to string uri The URI of the token this transfer applies to string connector The name of the token connector, as specified in the FireFly core configuration file. Required on input when there are more than one token connectors configured string namespace The namespace for the transfer, which must match the namespace of the token pool string key The blockchain signing key for the transfer. On input defaults to the first signing key of the organization that operates the node string from The source account for the transfer. On input defaults to the value of 'key' string to The target account for the transfer. 
On input defaults to the value of 'key' string amount The amount for the transfer. For non-fungible tokens this will always be 1. For fungible tokens, the number of decimals for the token pool should be considered when inputting the amount. For example, with 18 decimals a fractional balance of 10.234 will be specified as 10,234,000,000,000,000,000 FFBigInt protocolId An alphanumerically sortable string that represents this event uniquely with respect to the blockchain string message The UUID of a message that has been correlated with this transfer using the data field of the transfer in a compatible token connector UUID messageHash The hash of a message that has been correlated with this transfer using the data field of the transfer in a compatible token connector Bytes32 created The creation time of the transfer FFTime tx If submitted via FireFly, this will reference the UUID of the FireFly transaction (if the token connector in use supports attaching data) TransactionRef blockchainEvent The UUID of the blockchain event UUID config Input only field, with token connector specific configuration of the transfer. See your chosen token connector documentation for details JSONObject"},{"location":"reference/types/tokentransfer/#transactionref","title":"TransactionRef","text":"Field Name Description Type type The type of the FireFly transaction FFEnum: id The UUID of the FireFly transaction UUID"},{"location":"reference/types/transaction/","title":"Transaction","text":"FireFly Transactions are a grouping construct for a number of Operations and Events that need to complete or fail as a unit.
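The decimals arithmetic for the amount field described in the TokenTransfer table above can be sketched as a small helper (an illustration only; use your client library's own conversion where available):

```python
# Sketch: convert a human-readable fungible amount into the integer base
# units expected by the "amount" field, per the pool's "decimals" value.
from decimal import Decimal

def to_base_units(amount, decimals):
    units = Decimal(amount) * (Decimal(10) ** decimals)
    if units != units.to_integral_value():
        raise ValueError("amount has more precision than the token supports")
    return int(units)

# With 18 decimals, 10.234 becomes 10,234,000,000,000,000,000 as in the
# field description above.
to_base_units("10.234", 18)
```

Using Decimal (rather than float) keeps the conversion exact, which matters for the large integers involved at 18 decimal places.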
FireFly Transactions are not themselves Blockchain transactions, but in many cases there is exactly one Blockchain transaction associated with each FireFly transaction. Exceptions include unpinned transactions, where there is no blockchain transaction at all.
The Blockchain native transaction ID is stored in the FireFly transaction object when it is known. However, the FireFly transaction starts before a Blockchain transaction exists - because reliably submitting the blockchain transaction is one of the operations that is performed inside of the FireFly transaction.
The below screenshot from the FireFly Explorer nicely illustrates how multiple operations and events are associated with a FireFly transaction. In this example, the transaction is tracking the pinning to the blockchain of a batch of messages stored in IPFS.
So there is a Blockchain ID for the transaction - as there is just one Blockchain transaction regardless of how many messages are in the batch. There are operations for the submission of that transaction, and the upload of the data to IPFS. Then a corresponding Blockchain Event Received event for the detection of the event from the blockchain smart contract when the transaction was mined, and a Message Confirmed event for each message in the batch (in this case 1). In this example the message was a special Definition message that advertised a new Contract API to all members of the network - so there is a Contract API Confirmed event as well.
Each FireFly transaction has a UUID. This UUID is propagated through to all participants in a FireFly transaction. For example in a Token Transfer that is coordinated with an off-chain private Message, the transaction ID is propagated to all parties who are part of that transaction. So the same UUID can be used to find the transaction in the FireFly Explorer of any member who has access to the message. This is possible because hash-pinned off-chain data is associated with that on-chain transfer.
However, in the case of a raw ERC-20/ERC-721 transfer (without data), or any other raw Blockchain transaction, the FireFly transaction UUID cannot be propagated - so it will be local on the node that initiated the transaction.
"},{"location":"reference/types/transaction/#example","title":"Example","text":"{\n \"id\": \"4e7e0943-4230-4f67-89b6-181adf471edc\",\n \"namespace\": \"ns1\",\n \"type\": \"contract_invoke\",\n \"created\": \"2022-05-16T01:23:15Z\",\n \"blockchainIds\": [\n \"0x34b0327567fefed09ac7b4429549bc609302b08a9cbd8f019a078ec44447593d\"\n ]\n}\n"},{"location":"reference/types/transaction/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type id The UUID of the FireFly transaction UUID namespace The namespace of the FireFly transaction string type The type of the FireFly transaction FFEnum:\"none\"\"unpinned\"\"batch_pin\"\"network_action\"\"token_pool\"\"token_transfer\"\"contract_deploy\"\"contract_invoke\"\"contract_invoke_pin\"\"token_approval\"\"data_publish\" created The time the transaction was created on this node. Note the transaction is individually created with the same UUID on each participant in the FireFly transaction FFTime idempotencyKey An optional unique identifier for a transaction. Cannot be duplicated within a namespace, thus allowing idempotent submission of transactions to the API IdempotencyKey blockchainIds The blockchain transaction ID, in the format specific to the blockchain involved in the transaction. Not all FireFly transactions include a blockchain. FireFly transactions are extensible to support multiple blockchain transactions string[]"},{"location":"reference/types/verifier/","title":"Verifier","text":"A verifier is a cryptographic verification mechanism for an identity in FireFly.
FireFly generally defers verification of these keys to the lower layers of technologies in the stack - the blockchain (Fabric, Ethereum etc.) or Data Exchange technology.
As such the details of the public key cryptography scheme are not represented in the FireFly verifiers; only the string identifier of the verifier that is appropriate to the technology is stored.
{\n \"hash\": \"6818c41093590b862b781082d4df5d4abda6d2a4b71d737779edf6d2375d810b\",\n \"identity\": \"114f5857-9983-46fb-b1fc-8c8f0a20846c\",\n \"type\": \"ethereum_address\",\n \"value\": \"0x30017fd084715e41aa6536ab777a8f3a2b11a5a1\",\n \"created\": \"2022-05-16T01:23:15Z\"\n}\n"},{"location":"reference/types/verifier/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type hash Hash used as a globally consistent identifier for this namespace + type + value combination on every node in the network Bytes32 identity The UUID of the parent identity that has claimed this verifier UUID namespace The namespace of the verifier string type The type of the verifier FFEnum:\"ethereum_address\"\"tezos_address\"\"fabric_msp_id\"\"dx_peer_id\" value The verifier string, such as an Ethereum address, or Fabric MSP identifier string created The time this verifier was created on this node FFTime"},{"location":"reference/types/wsack/","title":"WSAck","text":"An ack must be sent on a WebSocket for each event delivered to an application.
The exception is when autoack is set in the WSStart payload or URL parameters, which causes automatic acknowledgement.
Your application should specify the id of each event that it acknowledges.
If the id is omitted, then FireFly will assume the oldest message delivered to the application that has not been acknowledged is the one the ack is associated with.
If multiple subscriptions are started on a WebSocket, then you need to specify the subscription namespace+name as part of each ack.
If you send an acknowledgement that cannot be correlated, then a WSError payload will be sent to the application.
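Building the ack payload for a received event, including the subscription reference needed when multiple subscriptions share one connection, can be sketched as follows (assuming the delivered event JSON carries an id field, as shown in the examples):

```python
# Sketch: construct the WSAck payload for a received event. The
# subscription namespace+name is required when multiple subscriptions
# are started on the same WebSocket.
import json

def ack_for(event, namespace, subscription_name):
    return json.dumps({
        "type": "ack",
        "id": event["id"],  # omit to ack the oldest unacknowledged event
        "subscription": {"namespace": namespace, "name": subscription_name},
    })

payload = ack_for({"id": "f78bf82b-1292-4c86-8a08-e53d855f1a64"},
                  "ns1", "app1_subscription")
```

Sending a payload like this for each delivered event keeps the at-least-once delivery window moving; an uncorrelatable ack triggers the WSError described above.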
"},{"location":"reference/types/wsack/#example","title":"Example","text":"{\n \"type\": \"ack\",\n \"id\": \"f78bf82b-1292-4c86-8a08-e53d855f1a64\",\n \"subscription\": {\n \"namespace\": \"ns1\",\n \"name\": \"app1_subscription\"\n }\n}\n"},{"location":"reference/types/wsack/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type type WSActionBase.type FFEnum:\"start\"\"ack\"\"protocol_error\"\"event_batch\" id WSAck.id UUID subscription WSAck.subscription SubscriptionRef"},{"location":"reference/types/wsack/#subscriptionref","title":"SubscriptionRef","text":"Field Name Description Type id The UUID of the subscription UUID namespace The namespace of the subscription. A subscription will only receive events generated in the namespace of the subscription string name The name of the subscription. The application specifies this name when it connects, in order to attach to the subscription and receive events that arrived while it was disconnected. If multiple apps connect to the same subscription, events are workload balanced across the connected application instances string"},{"location":"reference/types/wserror/","title":"WSError","text":""},{"location":"reference/types/wserror/#example","title":"Example","text":"{\n \"type\": \"protocol_error\",\n \"error\": \"FF10175: Acknowledgment does not match an inflight event + subscription\"\n}\n"},{"location":"reference/types/wserror/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type type WSAck.type FFEnum:\"start\"\"ack\"\"protocol_error\"\"event_batch\" error WSAck.error string"},{"location":"reference/types/wsstart/","title":"WSStart","text":"The start payload is sent after an application connects to a WebSocket, to start delivery of events over that connection.
The start command can refer to a subscription by name in order to reliably receive all matching events for that subscription, including those that were emitted when the application was disconnected.
Alternatively the start command can request \"ephemeral\": true in order to dynamically create a new subscription that lasts only for the duration that the connection is active.
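An ephemeral start request with a topic filter can be sketched as follows (a minimal payload illustration; field meanings are as described in the field table):

```python
# Sketch: a WSStart payload requesting an ephemeral subscription with a
# topic filter. The subscription exists only while the connection is open.
import json

start = json.dumps({
    "type": "start",
    "namespace": "ns1",
    "ephemeral": True,    # dynamically create a connection-scoped subscription
    "autoack": False,     # the application will ack each event explicitly
    "filter": {"topic": "^orders.*"},  # hypothetical topic regex
})
```

Because the subscription is ephemeral, no events are retained for redelivery after the connection closes; use a named subscription for reliable delivery.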
{\n \"type\": \"start\",\n \"autoack\": false,\n \"namespace\": \"ns1\",\n \"name\": \"app1_subscription\",\n \"ephemeral\": false,\n \"filter\": {\n \"message\": {},\n \"transaction\": {},\n \"blockchainevent\": {}\n },\n \"options\": {}\n}\n"},{"location":"reference/types/wsstart/#field-descriptions","title":"Field Descriptions","text":"Field Name Description Type type WSActionBase.type FFEnum:\"start\"\"ack\"\"protocol_error\"\"event_batch\" autoack WSStart.autoack bool namespace WSStart.namespace string name WSStart.name string ephemeral WSStart.ephemeral bool filter WSStart.filter SubscriptionFilter options WSStart.options SubscriptionOptions"},{"location":"reference/types/wsstart/#subscriptionfilter","title":"SubscriptionFilter","text":"Field Name Description Type events Regular expression to apply to the event type, to subscribe to a subset of event types string message Filters specific to message events. If an event is not a message event, these filters are ignored MessageFilter transaction Filters specific to events with a transaction. If an event is not associated with a transaction, this filter is ignored TransactionFilter blockchainevent Filters specific to blockchain events. If an event is not a blockchain event, these filters are ignored BlockchainEventFilter topic Regular expression to apply to the topic of the event, to subscribe to a subset of topics. 
Note for messages sent with multiple topics, a separate event is emitted for each topic string topics Deprecated: Please use 'topic' instead string tag Deprecated: Please use 'message.tag' instead string group Deprecated: Please use 'message.group' instead string author Deprecated: Please use 'message.author' instead string"},{"location":"reference/types/wsstart/#messagefilter","title":"MessageFilter","text":"Field Name Description Type tag Regular expression to apply to the message 'header.tag' field string group Regular expression to apply to the message 'header.group' field string author Regular expression to apply to the message 'header.author' field string"},{"location":"reference/types/wsstart/#transactionfilter","title":"TransactionFilter","text":"Field Name Description Type type Regular expression to apply to the transaction 'type' field string"},{"location":"reference/types/wsstart/#blockchaineventfilter","title":"BlockchainEventFilter","text":"Field Name Description Type name Regular expression to apply to the blockchain event 'name' field, which is the name of the event in the underlying blockchain smart contract string listener Regular expression to apply to the blockchain event 'listener' field, which is the UUID of the event listener. So you can restrict your subscription to certain blockchain listeners. Alternatively, to avoid your application needing to know listener UUIDs, you can set the 'topic' field of blockchain event listeners, and use a topic filter on your subscriptions string"},{"location":"reference/types/wsstart/#subscriptionoptions","title":"SubscriptionOptions","text":"Field Name Description Type firstEvent Whether your application would like to receive events from the 'oldest' event emitted by your FireFly node (from the beginning of time), or the 'newest' event (from now), or a specific event sequence.
Default is 'newest' SubOptsFirstEvent readAhead The number of events to stream ahead to your application, while waiting for confirmation of consumption of those events. At-least-once delivery semantics are used in FireFly, so if your application crashes/reconnects this is the maximum number of events you would expect to be redelivered after it restarts uint withData Whether message events delivered over the subscription should be packaged with the full data of those messages in-line as part of the event JSON payload, or if the application should make separate REST calls to download that data. May not be supported on some transports. bool batch Events are delivered in batches in an ordered array. The batch size is capped to the readAhead limit. The event payload is always an array even if there is a single event in the batch, allowing client-side optimizations when processing the events in a group. Available for both Webhooks and WebSockets. bool batchTimeout When batching is enabled, the optional timeout to send events even when the batch hasn't filled. string fastack Webhooks only: When true the event will be acknowledged before the webhook is invoked, allowing parallel invocations bool url Webhooks only: HTTP URL to invoke. Can be relative if a base URL is set in the webhook plugin config string method Webhooks only: HTTP method to invoke.
Default=POST string json Webhooks only: Whether to assume the response body is JSON, regardless of the returned Content-Type bool reply Webhooks only: Whether to automatically send a reply event, using the body returned by the webhook bool replytag Webhooks only: The tag to set on the reply message string replytx Webhooks only: The transaction type to set on the reply message string headers Webhooks only: Static headers to set on the webhook request `` query Webhooks only: Static query params to set on the webhook request `` tlsConfigName The name of an existing TLS configuration associated to the namespace to use string input Webhooks only: A set of options to extract data from the first JSON input data in the incoming message. Only applies if withData=true WebhookInputOptions retry Webhooks only: a set of options for retrying the webhook call WebhookRetryOptions httpOptions Webhooks only: a set of options for HTTP WebhookHTTPOptions"},{"location":"reference/types/wsstart/#webhookinputoptions","title":"WebhookInputOptions","text":"Field Name Description Type query A top-level property of the first data input, to use for query parameters string headers A top-level property of the first data input, to use for headers string body A top-level property of the first data input, to use for the request body. 
Default is the whole first body string path A top-level property of the first data input, to use for a path to append with escaping to the webhook path string replytx A top-level property of the first data input, to use to dynamically set whether to pin the response (so the requester can choose) string"},{"location":"reference/types/wsstart/#webhookretryoptions","title":"WebhookRetryOptions","text":"Field Name Description Type enabled Enables retry on HTTP calls, defaults to false bool count Number of times to retry the webhook call in case of failure int initialDelay Initial delay between retries when we retry the webhook call string maxDelay Max delay between retries when we retry the webhook call string"},{"location":"reference/types/wsstart/#webhookhttpoptions","title":"WebhookHTTPOptions","text":"Field Name Description Type proxyURL HTTP proxy URL to use for outbound requests to the webhook string tlsHandshakeTimeout The max duration to hold a TLS handshake alive string requestTimeout The max duration to wait for the webhook HTTP request to complete string maxIdleConns The max number of idle connections to hold pooled int idleTimeout The max duration to hold an HTTP keepalive connection between calls string connectionTimeout The maximum amount of time that a connection is allowed to remain with no data transmitted. string expectContinueTimeout See ExpectContinueTimeout in the Go docs string"},{"location":"releasenotes/","title":"Release Notes","text":"Full release notes
"},{"location":"releasenotes/#v133-mar-25-2025","title":"v1.3.3 - Mar 25, 2025","text":"What's New:
- Add new interface for blockchain plugins to stream receipt notifications in transactional batches
  - For blockchain connectors that have an ack-based reliable receipt stream (or other checkpoint system)
  - Allows strictly ordered delivery of receipts from blockchain plugins that support it
  - Allows resilience on receipt delivery to core, against a checkpoint maintained in the connector
- Changes in metrics:
  - Added new metrics for Data Exchange, for monitoring by a timeseries and alerting system:
    - ff_multiparty_node_identity_dx_mismatch notifies that the certificate in FireFly Core is different to the one stored in Data Exchange
    - ff_multiparty_node_identity_dx_expiry_epoch emits the expiry timestamp of the Data Exchange certificate, useful for SREs to monitor before it expires
  - Added a namespace label to existing metrics, to separate metrics more easily
  - Added HTTP response time and complete gauge support to firefly-common
  - Allow the metrics server to host additional routes, such as status endpoints
    - This resulted in a new monitoring configuration section, which is more appropriate than metrics; the metrics section has now been deprecated
- Fix to an issue that resulted in retried private messages using the local namespace rather than the network namespace
- Fix to an issue that could result in messages being marked Pending on re-delivery of a batch over the network
- Miscellaneous bug fixes and minor improvements
- Documentation updates, including a new troubleshooting section for multiparty messages
- CVE fixes and adoption of the OpenSSF Scorecard on key repositories

Migration note: As part of the changes to add the new namespace label to metrics, we changed from using a Prometheus Counter to a CounterVec. As a result there is no default value of 0 on the counter, which means that when users query for a specific metric such as ff_message_rejected_total, it will not be available until the CounterVec associated with that metric is incremented. This has been determined to be an easy upgrade for SREs monitoring these metrics, hence its inclusion in a patch release.
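The CounterVec behaviour noted above (no 0-valued series until the first increment) can be illustrated with a minimal sketch. This class is illustrative only — it mimics the semantics of a labelled Prometheus counter, and is not FireFly's actual Go implementation:

```python
from collections import defaultdict

class CounterVec:
    """Minimal sketch of Prometheus CounterVec semantics (not the real
    client library): a per-label-set child counter only exists after its
    first increment, so there is no default 0 sample."""

    def __init__(self, name):
        self.name = name
        self._children = defaultdict(float)

    def labels(self, **label_values):
        key = tuple(sorted(label_values.items()))
        vec = self

        class _Child:
            def inc(self, amount=1.0):
                vec._children[key] += amount

        return _Child()

    def collect(self):
        # Exposition contains one sample per *existing* child only.
        samples = {}
        for key, value in self._children.items():
            labels = ",".join('{}="{}"'.format(k, v) for k, v in key)
            samples["{}{{{}}}".format(self.name, labels)] = value
        return samples

rejected = CounterVec("ff_message_rejected_total")
# Before any increment, a dashboard query for the metric finds nothing:
assert rejected.collect() == {}

rejected.labels(namespace="default").inc()
assert rejected.collect() == {
    'ff_message_rejected_total{namespace="default"}': 1.0
}
```

This is why an alert or dashboard expecting a constant 0 for `ff_message_rejected_total` should be updated to treat an absent series as "no rejections yet".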
What's New:
2^53-1
What's New:
/status/multiparty
Migration guide
What's New:
Migration guide
What's New:
X-FireFly-Request-ID HTTP header is now passed through to FireFly dependency microservices
Migration guide
What's New:
What's New:
What's New:
What's New:
This release includes lots of major hardening, performance improvements, and bug fixes, as well as more complete documentation and OpenAPI specifications.
What's New:
What's New:
What's New:
What's New:
What's New:
Hyperledger FireFly v1.1.0 is a feature release that includes significant new functionality around namespaces and plugins, as detailed in FIR-12. As a result, upgrading an existing FireFly environment from any prior release may require special steps (depending on the functionality used).
If seamless data preservation is not required, you can simply create a new network from scratch using FireFly v1.1.0.
If you want to preserve data from an existing 1.0.x network, significant care has been taken to ensure that it is possible. Most existing environments can be upgraded with minimal extra steps. This document attempts to call out all potentially breaking changes (both common and uncommon), so that you can easily assess the impact of the upgrade and any needed preparation before proceeding.
"},{"location":"releasenotes/1.1_migration_guide/#before-upgrading","title":"Before Upgrading","text":"These steps are all safe to do while running FireFly v1.0.x. While they do not have to be done prior to upgrading, performing them ahead of time may allow you to preemptively fix some problems and ease the migration to v1.1.0.
"},{"location":"releasenotes/1.1_migration_guide/#common-steps","title":"Common Steps","text":"Upgrade to latest v1.0.x patch release
Before upgrading to v1.1.0, it is strongly recommended to upgrade to the latest v1.0.x patch release (v1.0.4 as of the writing of this document). Do not proceed any further in this guide until all nodes are successfully running the latest patch release version.
Fix any deprecated config usage
All items in FireFly's YAML config that were deprecated at any time in the v1.0.x line will be unsupported in v1.1.0. After upgrading to the latest v1.0.x patch release, you should therefore look for any deprecation warnings when starting FireFly, and ensure they are fixed before upgrading to v1.1.0. Failure to do so will cause your config file to be rejected in v1.1.0, and FireFly will fail to start.
You can utilize the ffconfig tool to automatically check and fix deprecated config with a command such as:
ffconfig migrate -f <input-file> -o <output-file> --to 1.0.4
This should ensure your config file is acceptable to both v1.0.x and v1.1.x.
Note that if you are attempting to migrate a Dockerized development environment (such as one stood up by the firefly-cli), you may need to edit the config file inside the Docker container. Environments created by a v1.0.x CLI do not expose the config file outside the Docker container.
"},{"location":"releasenotes/1.1_migration_guide/#less-common-situations","title":"Less Common Situations","text":"Record all broadcast namespaces in the config file
Expand for migration details only if your application uses non-default namespaces. FireFly v1.0 allowed for the dynamic creation of new namespaces by broadcasting a namespace definition to all nodes. This functionality is _removed_ in v1.1.0. If your network relies on any namespaces that were created via a broadcast, you must add those namespaces to the `namespaces.predefined` list in your YAML config prior to upgrade. If you do not, they will cease to function after upgrading to v1.1.0 (all events on those namespaces will be ignored by your node).
Identify queries for organization/node identities
Expand for migration details only if your application queries /network/organizations or /network/nodes. Applications that query `/network/organizations` or `/network/nodes` will temporarily receive _empty result lists_ after upgrading to v1.1.0, until all identities have been re-registered (see steps in "After Upgrading"). This is because organization and node identities were broadcast on a global "ff_system" namespace in v1.0, but are no longer global in v1.1.0. The simplest solution is to shut down applications until the FireFly upgrade is complete on all nodes and all identities have been re-broadcast. If this poses a problem and you require zero downtime from these APIs, you can proactively mitigate with the following steps in your application code: - Applications that query `/network/organizations` may be altered to _also_ query `/namespaces/ff_system/network/organizations` and combine the results (but should disregard the second query if it fails). - Applications that query `/network/nodes` may be altered to _also_ query `/namespaces/ff_system/network/nodes` and combine the results (but should disregard the second query if it fails). Further details on the changes to `/network` APIs are provided in the next section.
Identify usage of changed APIs
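The zero-downtime mitigation for `/network/organizations` and `/network/nodes` queries described above can be sketched as a small client-side helper. This is a hypothetical illustration: the `fetch` callable and the `id`-based de-duplication are assumptions standing in for your real HTTP client, not part of the FireFly API:

```python
def combine_network_query(fetch, path):
    """Query the v1.1.0 path, then also the legacy ff_system path,
    combining results; the legacy query is disregarded if it fails."""
    results = list(fetch(path))
    try:
        legacy = list(fetch("/namespaces/ff_system" + path))
    except Exception:
        legacy = []  # disregard a failing legacy query, per the guidance above
    seen = {item["id"] for item in results}
    results.extend(item for item in legacy if item["id"] not in seen)
    return results

# Stub fetchers standing in for real HTTP calls to a FireFly node.
def fetch_ok(path):
    if path == "/network/organizations":
        return [{"id": "org-1"}]  # new path: only re-registered identities
    return [{"id": "org-1"}, {"id": "org-2"}]  # ff_system still has both

orgs = combine_network_query(fetch_ok, "/network/organizations")
assert [o["id"] for o in orgs] == ["org-1", "org-2"]
```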
Expand for migration details on all changes to /namespaces, /status, and /network APIs. The primary API change in this version is that the "global" paths beginning with `/network` and `/status` have been relocated under the `/namespaces/{ns}` prefix, as this data is now specific to a namespace instead of being global. At the same time, the API server has been enhanced so that omitting a namespace from an API path will _query the default namespace_ instead. That is, querying `/messages` is now the same as querying `/namespaces/default/messages` (assuming your default namespace is named "default"). This has the effect that most of the moved APIs will continue to function without requiring changes. See below for details on the affected paths. These global routes have been moved under `/namespaces/{ns}`. Continuing to use them without the namespace prefix **will still work**, and will simply query the default namespace.
/network/diddocs/{did}
/network/nodes
/network/nodes/{nameOrId}
/network/nodes/self
/network/organizations
/network/organizations/{nameOrId}
/network/organizations/self
/status
/status/batchmanager
These global routes have been moved under `/namespaces/{ns}` and have also been deprecated in favor of a new route name. Continuing to use them without the namespace prefix **will still work**, and will simply query the default namespace. However, it is recommended to switch to the new API spelling when possible.
/network/identities - replaced by existing /namespaces/{ns}/identities
/network/identities/{did} - replaced by new /namespaces/{ns}/identities/{did}
These global routes have been permanently renamed. They are deemed less likely to be used by client applications, but any usage **will be broken** by this release and must be changed after upgrading. 
/status/pins - moved to /namespaces/{ns}/pins (or /pins to query the default namespace)
/status/websockets - moved to /websockets
The response bodies of the following APIs have also had fields removed. Any usage of the removed fields **will be broken** by this release and must be changed after upgrading.
/namespaces - removed all fields except "name", "description", "created"
/namespaces/{ns} - same as above
/namespaces/{ns}/status - removed "defaults"
Adjust or remove usage of admin APIs
Expand for migration details on all changes to /admin and /spi. FireFly provides an administrative API in addition to the normal API. In v1.1.0, this has been renamed to SPI (Service Provider Interface). Consequently, all of the routes have moved from `/admin` to `/spi`, and the config section has been renamed from `admin` to `spi`. There is no automatic migration provided, so any usage of the old routes will need to be changed, and your config file will need to be adjusted if you wish to keep the SPI enabled (although it is perfectly fine to have both `admin` and `spi` sections if needed for migration). The ability to set FireFly config via these routes has also been removed. Any usage of the `/admin/config` routes must be discontinued, and config should be set exclusively by editing the FireFly config file. The only route retained from this functionality was `/admin/config/reset`, which has been renamed to `/spi/reset` - this will continue to be available for performing a soft reset that reloads FireFly's config."},{"location":"releasenotes/1.1_migration_guide/#performing-the-upgrade","title":"Performing the Upgrade","text":"Backup current data
Before beginning the upgrade, it is recommended to take a full backup of your FireFly database(s). If you encounter any serious issues after the upgrade, you should revert to the old binary and restore your database snapshot. While down-migrations are provided to revert a database in place, they are not guaranteed to work in all scenarios.
Upgrade FireFly and all dependencies
Bring FireFly down and replace it with the new v1.1.0 binary. You should also replace other runtimes (such as blockchain, data exchange, and token connectors) with the supported versions noted in the v1.1.0 release. Once all binaries have been replaced, start them up again.
"},{"location":"releasenotes/1.1_migration_guide/#after-upgrading","title":"After Upgrading","text":"Ensure nodes start without errors
Ensure that FireFly starts without errors. There will likely be new deprecation warnings for config that was deprecated in v1.1.0, but these are safe to ignore for the moment. If you face any errors or crashes, please report the logs to the FireFly channel on Discord, and return your nodes to running the previous version of FireFly if necessary.
Re-broadcast organization and node identities
Once all nodes in the multiparty network have been upgraded and are running without errors, each node should re-broadcast its org and node identity by invoking /network/organizations/self and /network/nodes/self (or, if your application uses a non-default namespace, by invoking the /namespaces/{ns}-prefixed versions of these APIs).
This will ensure that queries to /network/organizations and /network/nodes return the expected results, and will register the identities in a way that can be supported by both V1 and V2 multiparty contracts (see \"Upgrading the Multi-Party Contract\").
Update config file to latest format
Once the network is stable, you should update your config file(s) again to remove deprecated configuration and set yourself up to take advantage of all the new configuration options available in v1.1.0.
You can utilize the ffconfig tool to automatically check and fix deprecated config with a command such as:
ffconfig migrate -f <input-file> -o <output-file>
"},{"location":"releasenotes/1.1_migration_guide/#upgrading-the-multi-party-contract","title":"Upgrading the Multi-Party Contract","text":"FireFly v1.1.0 includes a new recommended version of the contract used for multi-party systems (for both Ethereum and Fabric). It also introduces a versioning method for this contract, and a path for migrating networks from one contract address to a new one.
After upgrading FireFly itself, it is recommended to upgrade your multi-party system to the latest contract version by following these steps.
ff deploy or a similar method.
namespaces:
  predefined:
    - name: default
      multiparty:
        enabled: true
        contract:
          - location:
              address: 0x09f107d670b2e69a700a4d9ef1687490ae1568db
          - location:
              address: 0x1bee32b37dc48e99c6b6bf037982eb3bee0e816b
This example assumes 0x09f1... represents the address of the original contract, and 0x1bee... represents the new one. Note that if you have multiple namespaces, you must repeat this step for each namespace in the config - and you must deploy a unique contract instance per namespace (in the new network rules, multiple namespaces cannot share a single contract).
/namespaces/{ns}/network/action FireFly API with a body of {"type": "terminate"}. This will terminate the old contract and instruct all members to move simultaneously to the newly configured one.
/namespaces/{ns}/status on each node and checking that the active multi-party contract matches the new address.
Hyperledger FireFly v1.2.0 is a feature release that includes new features for tokens and data management as well as enhancements for debugging FireFly apps and operating FireFly nodes.
For the most part, upgrading from v1.1.x to v1.2.0 should be a seamless experience, but there are several important things to note about changes between the two versions, which are described in detail on this page.
"},{"location":"releasenotes/1.2_migration_guide/#tokens-considerations","title":"Tokens considerations","text":"There are quite a few new features around tokens in FireFly v1.2.0. Most notably, FireFly's token APIs now work with a much wider variety of ERC-20, ERC-721, and ERC-1155 contracts, supporting variations of these contracts generated by the OpenZeppelin Contract Wizard.
"},{"location":"releasenotes/1.2_migration_guide/#sample-token-contract-deprecations","title":"Sample token contract deprecations","text":"In FireFly v1.2.0, two of the older, lesser-used sample token contracts have been deprecated. The ERC20NoData and ERC721NoData contracts have been updated and the previous versions are no longer supported, unless you set the USE_LEGACY_ERC20_SAMPLE=true or USE_LEGACY_ERC721_SAMPLE=true environment variables for your token connector.
For more details you can read the description of the pull requests (#104 and #109) where these changes were made.
"},{"location":"releasenotes/1.2_migration_guide/#differences-from-v110","title":"Differences from v1.1.0","text":""},{"location":"releasenotes/1.2_migration_guide/#optional-fields","title":"Optional fields","text":"Some token connectors support some optional fields when using them with certain contracts. For example, the ERC-721 token connector supports a URI field. If these optional fields are specified in an API call to a token connector and contract that does not support that field, an error will be returned, rather than the field being silently ignored.
"},{"location":"releasenotes/1.2_migration_guide/#auto-incrementing-token-index","title":"Auto incrementing token index","text":"In FireFly v1.2.0 the default ERC-721 and ERC-1155 contracts have changed to automatically increment the token index when a token is minted. This is useful when many tokens may be minted around the same time, or by different minters. This lets the blockchain handle the ordering and keep track of which token index should be minted next, rather than making that an application concern.
NOTE: These new contracts will only be used for brand new FireFly stacks with v1.2.0. If you have an existing stack, the new token contracts will not be used, unless you specifically deploy them and start using them.
"},{"location":"releasenotes/1.2_migration_guide/#data-management-considerations","title":"Data management considerations","text":"FireFly v1.2.0 introduces the ability to delete data records and their associated blobs, if present. This will remove the data and blob rows from the FireFly database, as well as removing the blob from the Data Exchange microservice. This can be very useful if your organization has data retention requirements for sensitive, private data and needs to purge data after a certain period of time.
Please note that this API only removes data from the FireFly node on which it is called. If data has been shared with other participants of a multi-party network, it is each participant's responsibility to satisfy their own data retention policies.
"},{"location":"releasenotes/1.2_migration_guide/#differences-from-v110_1","title":"Differences from v1.1.0","text":"It is important to note that FireFly now stores a separate copy of a blob for a given payload, even if the same data object is sent in different messages, by different network participants. Previously, in FireFly v1.1.0 the blob was de-duplicated in some cases. In FireFly v1.2.0, deleting the data object will result in each copy of the associated payload being removed.
NOTE: If data has been published to IPFS, it cannot be deleted completely. You can still call the DELETE method on it, and it will be removed from FireFly's database and Data Exchange, but the payload will still persist in IPFS.
Please see the optional token fields section above for details. If your application code is calling any token API endpoints with optional fields that are not supported by your token connector or contract, you will need to remove those fields from your API request or it will fail.
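One defensive approach in application code is to filter optional fields out of a token API request body before sending it. The helper below is a hypothetical sketch — the `supported_fields` set must be determined per connector and contract, and is not something the FireFly API provides:

```python
def strip_unsupported(payload, supported_fields):
    """Drop optional fields the target token connector/contract does not
    support, so the token API call is not rejected (v1.2.0 returns an
    error for unsupported optional fields instead of ignoring them)."""
    return {k: v for k, v in payload.items() if k in supported_fields}

# e.g. an ERC-20 connector that does not support an ERC-721-style 'uri' field
request = {"amount": "10", "uri": "ipfs://example"}
assert strip_unsupported(request, {"amount", "to", "key"}) == {"amount": "10"}
```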
"},{"location":"releasenotes/1.2_migration_guide/#transaction-output-details","title":"Transaction output details","text":"In previous versions of FireFly, transaction output details used to appear under the output object in the response body. Behind the scenes, some of this data is now fetched from the blockchain connector asynchronously. If your application needs the detailed output, it should now add a fetchStatus=true query parameter when querying for an Operation. Additionally the details have moved from the output field to a new detail field on the response body. For more details, please refer to the PRs where this change was made (#1111 and #1151). For a detailed example comparing what an Operation response body looks like in FireFly v1.2.0 compared with v1.1.x, you can expand the sections below.
\n{\n \"id\": \"2b0ec132-2abd-40f0-aa56-79871a7a23b9\",\n \"namespace\": \"default\",\n \"tx\": \"cb0e6de1-50a9-44f2-a2ff-411f6dcc19c9\",\n \"type\": \"blockchain_invoke\",\n \"status\": \"Succeeded\",\n \"plugin\": \"ethereum\",\n \"input\": {\n \"idempotencyKey\": \"5a634941-29cb-4a4b-b5a7-196331723d6d\",\n \"input\": {\n \"newValue\": 42\n },\n \"interface\": \"46189886-cae5-42ff-bf09-25d4f58d649e\",\n \"key\": \"0x2ecd8d5d97fb4bb7af0fbc27d7b89fd6f0366350\",\n \"location\": {\n \"address\": \"0x9d7ea8561d4b21cba495d1bd29a6d3421c31cf8f\"\n },\n \"method\": {\n \"description\": \"\",\n \"id\": \"d1d2a0cf-19ea-42c3-89b8-cb65850fb9c5\",\n \"interface\": \"46189886-cae5-42ff-bf09-25d4f58d649e\",\n \"name\": \"set\",\n \"namespace\": \"default\",\n \"params\": [\n {\n \"name\": \"newValue\",\n \"schema\": {\n \"details\": {\n \"type\": \"uint256\"\n },\n \"type\": \"integer\"\n }\n }\n ],\n \"pathname\": \"set\",\n \"returns\": []\n },\n \"methodPath\": \"set\",\n \"options\": null,\n \"type\": \"invoke\"\n },\n \"output\": {\n \"Headers\": {\n \"requestId\": \"default:2b0ec132-2abd-40f0-aa56-79871a7a23b9\",\n \"type\": \"TransactionSuccess\"\n },\n \"protocolId\": \"000000000052/000000\",\n \"transactionHash\": \"0x9adae77a46bf869ee97aab38bb5d789fa2496209500801e87bf9e2cce945dc71\"\n },\n \"created\": \"2023-01-24T14:08:17.371587084Z\",\n \"updated\": \"2023-01-24T14:08:17.385558417Z\",\n \"detail\": {\n \"created\": \"2023-01-24T14:08:17.378147625Z\",\n \"firstSubmit\": \"2023-01-24T14:08:17.381787042Z\",\n \"gas\": \"42264\",\n \"gasPrice\": 0,\n \"history\": [\n {\n \"count\": 1,\n \"info\": \"Success=true,Receipt=000000000052/000000,Confirmations=0,Hash=0x9adae77a46bf869ee97aab38bb5d789fa2496209500801e87bf9e2cce945dc71\",\n \"lastOccurrence\": null,\n \"time\": \"2023-01-24T14:08:17.384371042Z\"\n },\n {\n \"count\": 1,\n \"info\": \"Submitted=true,Receipt=,Hash=0x9adae77a46bf869ee97aab38bb5d789fa2496209500801e87bf9e2cce945dc71\",\n \"lastOccurrence\": null,\n 
\"time\": \"2023-01-24T14:08:17.381908959Z\"\n }\n ],\n \"id\": \"default:2b0ec132-2abd-40f0-aa56-79871a7a23b9\",\n \"lastSubmit\": \"2023-01-24T14:08:17.381787042Z\",\n \"nonce\": \"34\",\n \"policyInfo\": null,\n \"receipt\": {\n \"blockHash\": \"0x7a2ca7cc57fe1eb4ead3e60d3030b123667d18eb67f4b390fb0f51f970f1fba0\",\n \"blockNumber\": \"52\",\n \"extraInfo\": {\n \"contractAddress\": null,\n \"cumulativeGasUsed\": \"28176\",\n \"from\": \"0x2ecd8d5d97fb4bb7af0fbc27d7b89fd6f0366350\",\n \"gasUsed\": \"28176\",\n \"status\": \"1\",\n \"to\": \"0x9d7ea8561d4b21cba495d1bd29a6d3421c31cf8f\"\n },\n \"protocolId\": \"000000000052/000000\",\n \"success\": true,\n \"transactionIndex\": \"0\"\n },\n \"sequenceId\": \"0185e41b-ade2-67e4-c104-5ff553135320\",\n \"status\": \"Succeeded\",\n \"transactionData\": \"0x60fe47b1000000000000000000000000000000000000000000000000000000000000002a\",\n \"transactionHash\": \"0x9adae77a46bf869ee97aab38bb5d789fa2496209500801e87bf9e2cce945dc71\",\n \"transactionHeaders\": {\n \"from\": \"0x2ecd8d5d97fb4bb7af0fbc27d7b89fd6f0366350\",\n \"to\": \"0x9d7ea8561d4b21cba495d1bd29a6d3421c31cf8f\"\n },\n \"updated\": \"2023-01-24T14:08:17.384371042Z\"\n }\n}\n v1.1.x Operation response body \n{\n \"id\": \"4a1a19cf-7fd2-43f1-8fae-1e3d5774cf0d\",\n \"namespace\": \"default\",\n \"tx\": \"2978a248-f5df-4c78-bf04-711ab9c79f3d\",\n \"type\": \"blockchain_invoke\",\n \"status\": \"Succeeded\",\n \"plugin\": \"ethereum\",\n \"input\": {\n \"idempotencyKey\": \"5dc2ee8a-be5c-4e60-995f-9e21818a441d\",\n \"input\": {\n \"newValue\": 42\n },\n \"interface\": \"752af5a3-d383-4952-88a9-b32b837ed1cb\",\n \"key\": \"0xd8a27cb390fd4f446acce01eb282c7808ec52572\",\n \"location\": {\n \"address\": \"0x7c0a598252183999754c53d97659af9436293b82\"\n },\n \"method\": {\n \"description\": \"\",\n \"id\": \"1739f25d-ab48-4534-b278-58c4cf151bf9\",\n \"interface\": \"752af5a3-d383-4952-88a9-b32b837ed1cb\",\n \"name\": \"set\",\n \"namespace\": \"default\",\n \"params\": [\n 
{\n \"name\": \"newValue\",\n \"schema\": {\n \"details\": {\n \"type\": \"uint256\"\n },\n \"type\": \"integer\"\n }\n }\n ],\n \"pathname\": \"set\",\n \"returns\": []\n },\n \"methodPath\": \"set\",\n \"options\": null,\n \"type\": \"invoke\"\n },\n \"output\": {\n \"_id\": \"default:4a1a19cf-7fd2-43f1-8fae-1e3d5774cf0d\",\n \"blockHash\": \"0x13660667b69f48646025a87db603abdeeaa88036e9a1252b1af4ec1fc3e1d850\",\n \"blockNumber\": \"52\",\n \"cumulativeGasUsed\": \"28176\",\n \"from\": \"0xd8a27cb390fd4f446acce01eb282c7808ec52572\",\n \"gasUsed\": \"28176\",\n \"headers\": {\n \"id\": \"8dfaabd1-4493-4a64-52dd-762497022ba2\",\n \"requestId\": \"default:4a1a19cf-7fd2-43f1-8fae-1e3d5774cf0d\",\n \"requestOffset\": \"\",\n \"timeElapsed\": 0.109499833,\n \"timeReceived\": \"2023-01-24T17:16:52.372449013Z\",\n \"type\": \"TransactionSuccess\"\n },\n \"nonce\": \"0\",\n \"receivedAt\": 1674580612482,\n \"status\": \"1\",\n \"to\": \"0x7c0a598252183999754c53d97659af9436293b82\",\n \"transactionHash\": \"0x522e5aac000f5befba61ddfd707aaf5c61314f47e00cd0c5b779f69dd14bd899\",\n \"transactionIndex\": \"0\"\n },\n \"created\": \"2023-01-24T17:16:52.368498346Z\",\n \"updated\": \"2023-01-24T17:16:52.48408293Z\"\n}\n"},{"location":"releasenotes/1.2_migration_guide/#local-development-considerations","title":"Local development considerations","text":"It is also worth noting that the default Ethereum blockchain connector in the FireFly CLI is now Evmconnect. Ethconnect is still fully supported, but FireFly v1.2.0 marks a point of maturity in the project where it is now the recommended choice for any Ethereum based FireFly stack.
"},{"location":"releasenotes/1.3_migration_guide/","title":"v1.3.0 Migration Guide","text":""},{"location":"releasenotes/1.3_migration_guide/#overview","title":"Overview","text":"Hyperledger FireFly v1.3.0 is a feature release that includes changes around event streaming, contract listeners, define/publish APIs as well as a range of general fixes.
For the most part, upgrading from v1.2.x to v1.3.0 should be a seamless experience, but there are several important things to note about changes between the two versions, which are described in detail on this page.
"},{"location":"releasenotes/1.3_migration_guide/#docker-image-file-permission-considerations","title":"Docker image file permission considerations","text":"Following security best practices, the official published Docker images for FireFly Core and all of its microservices now run as a non-root user by default. If you are running a FireFly release prior to v1.3.0, depending on how you were running your containers, you may need to adjust file permissions inside volumes that these containers write to. If you have overridden the default user for your containers (for example though a Kubernetes deployment) you may safely ignore this section.
\u26a0\ufe0f Warning: If you have been using the default root user and upgrade to FireFly v1.3.0 without changing these file permissions your services may fail to start.
The new default user is 1001. If you are not overriding the user for your container, this user or group needs to have write permissions in several places. The list of services and directories you should specifically check are:
persistence.leveldb.path directory set in the config file
rest.rest-gateway.openapi.storagePath directory in the config file
rest.rest-gateway.openapi.eventsDB directory in the config file
receipts.leveldb.path directory in the config file
events.leveldb.path directory in the config file
DATA_DIRECTORY environment variable (default /data)
As of FireFly v1.3.0, in multi-party namespaces, contract interfaces, contract APIs, and token pools have distinct steps in their creation flow, and they are unpublished by default.
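A quick way to sanity-check the directories listed above before upgrading is to test whether the new default container user could write to them. The helper below is illustrative only (it checks plain Unix mode bits against UID/GID 1001 and ignores ACLs and capabilities); in practice you would apply it to `os.stat(path)` results for each directory, or simply `chown` the volumes:

```python
import stat

FIREFLY_UID = 1001  # new default container user as of v1.3.0

def writable_by(st_uid, st_gid, st_mode, uid=FIREFLY_UID, gid=FIREFLY_UID):
    """Return True if a file with the given ownership/mode is writable
    by the given uid/gid, using standard Unix permission-bit rules."""
    if st_uid == uid:
        return bool(st_mode & stat.S_IWUSR)  # owner write bit
    if st_gid == gid:
        return bool(st_mode & stat.S_IWGRP)  # group write bit
    return bool(st_mode & stat.S_IWOTH)      # other write bit

assert writable_by(1001, 0, 0o700)      # owned by user 1001: ok
assert not writable_by(0, 0, 0o755)     # root-owned, no group/other write
assert writable_by(0, 1001, 0o770)      # group 1001 has write access
```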
The following changes impact contract interfaces, contract APIs, and token pools.
Previously, when creating one of the affected resources in a multi-party network, if successful, the resource would be automatically broadcast to other namespaces. In FireFly v1.3.0 this behaviour has changed: when one of these resources is created it now has one of two distinct states, published or unpublished. Unless FireFly is told otherwise, the default state after creation is unpublished.
When a resource is unpublished, it is not broadcast to other namespaces in the multi-party network, and it is not pinned to the blockchain. In this state, it is possible to call the DELETE APIs to remove the resource (such as when its configuration needs to be changed) and reclaim the name that has been given to it, so that it can be recreated.
When a resource is published, it is broadcast to other namespaces in the multi-party network, and it is pinned to the blockchain. In this state, it is no longer possible to call the DELETE APIs to remove the resource.
In FireFly v1.2.0, to create one of the affected resources and publish it to other parties, a POST call would be made to its respective API route and the broadcast would happen immediately. To achieve the same behaviour in FireFly v1.3.0, there are two options for all impacted resources: either provide a query parameter at creation to signal immediate publish, or make a subsequent API call to publish the resource.
Previously, to create a contract interface a POST call would be made to /contracts/interfaces and the interface would be broadcast to all other namespaces. In FireFly v1.3.0, this same call can be made with the publish=true query parameter, or a subsequent API call can be made on an unpublished interface on POST /contracts/interfaces/{name}/{version}/publish, specifying the name and version of the interface.
For an exact view of the changes to contract interfaces, see PR #1279.
"},{"location":"releasenotes/1.3_migration_guide/#contract-apis","title":"Contract APIs","text":"Previously, to create a contract API a POST call would be made to /apis and the API would be broadcast to all other namespaces. In FireFly v1.3.0, this same call can be made with the publish=true query parameter, or a subsequent API call can be made on an unpublished API on /apis/{apiName}/publish, specifying the name of the API.
For an exact view of the changes to contract APIs, see PR #1322.
"},{"location":"releasenotes/1.3_migration_guide/#token-pools","title":"Token pools","text":"Previously, to create a token pool a POST call would be made to /tokens/pools and the token pool would be broadcast to all other namespaces. In FireFly v1.3.0, this same call can be made with the publish=true query parameter, or a subsequent API call can be made on an unpublished token pool on /tokens/pools/{nameOrId}/publish, specifying the name or ID of the token pool.
For an exact view of the changes to token pools, see PR #1261.
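The two publish options described above apply uniformly to contract interfaces, contract APIs, and token pools. A sketch of how a client might choose between them — the helper and the (method, path, query) tuple shape are illustrative; only the routes and the publish=true query parameter come from the sections above:

```python
def publish_request(collection_path, name, publish_at_create):
    """Build the (method, path, query) for creating-and-publishing a
    resource in one call, vs. publishing an existing unpublished one."""
    if publish_at_create:
        # e.g. POST /tokens/pools?publish=true (resource body sent separately)
        return ("POST", collection_path, {"publish": "true"})
    # e.g. POST /tokens/pools/{nameOrId}/publish on an existing resource
    return ("POST", "{}/{}/publish".format(collection_path, name), {})

assert publish_request("/tokens/pools", "pool1", True) == \
    ("POST", "/tokens/pools", {"publish": "true"})
assert publish_request("/apis", "myapi", False) == \
    ("POST", "/apis/myapi/publish", {})
```

For contract interfaces, the second form takes both name and version in the path (POST /contracts/interfaces/{name}/{version}/publish).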
"},{"location":"releasenotes/1.3_migration_guide/#event-stream-considerations","title":"Event stream considerations","text":""},{"location":"releasenotes/1.3_migration_guide/#single-event-stream-per-namespace","title":"Single event stream per namespace","text":"In this release, the model for event streams in a multi-party network has fundamentally changed. Previously, there was a single event stream for each blockchain plugin, even if this plugin served multiple namespaces. In FireFly v1.3.0 there is now a single event stream per namespace in the network.
When migrating from FireFly v1.2.x to v1.3.0, due to these changes, existing event streams will be rebuilt. This means that connectors will replay past events to FireFly, but FireFly will automatically de-duplicate them by design, so this is a safe operation.
The migration to individual event streams promotes high-availability capability and is not itself a breaking change; however, the ID format for event streams has changed. Event streams now follow the format <plugin_topic_name>/<namespace_name>. For example, an event stream for the default namespace with a plugin topic of 0 would now be: 0/default.
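The new ID format above can be composed and split mechanically; a trivial sketch (the helper names are illustrative, and it assumes the plugin topic name itself contains no "/"):

```python
def event_stream_id(plugin_topic, namespace):
    """v1.3.0 event stream IDs follow <plugin_topic_name>/<namespace_name>."""
    return "{}/{}".format(plugin_topic, namespace)

def parse_event_stream_id(stream_id):
    # Split on the first "/" only, so the namespace part is kept whole.
    topic, _, namespace = stream_id.partition("/")
    return topic, namespace

assert event_stream_id("0", "default") == "0/default"
assert parse_event_stream_id("0/default") == ("0", "default")
```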
In summary, these changes should not impact end-users of FireFly, but they are noted here as they are significant architectural changes to the relationships between namespaces, plugins, and connectors.
For an exact view of the changes, see PR #1388.
"},{"location":"releasenotes/1.3_migration_guide/#configuration-considerations","title":"Configuration considerations","text":""},{"location":"releasenotes/1.3_migration_guide/#deprecated-configuration","title":"Deprecated configuration","text":"In FireFly v1.3.0, deprecated configuration options for the blockchain, database, dataexchange, sharedstorage and tokens plugins have been removed and can no longer be provided.
For an exact view of the changes, see PR #1289.
"},{"location":"releasenotes/1.3_migration_guide/#token-pool-considerations","title":"Token pool considerations","text":""},{"location":"releasenotes/1.3_migration_guide/#activity-indicator-changes","title":"Activity indicator changes","text":"Token pools have a status. Previously, when creating a token pool, it would go into a pending state immediately following creation, and then into a confirmed state when it had been confirmed on the chain. This behaviour is still consistent in FireFly v1.3.0, but the representation of the data has changed.
Previously, token pools had a state field with an enumerated value of either pending or confirmed. This has been replaced with an active boolean field, where true indicates the token pool has been committed onto chain, and false indicates the transaction has not yet been confirmed.
For an exact view of the changes, see PR #1305.
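The shape of the change can be pictured as a simple mapping from the old enumerated state field to the new active boolean. This is an illustrative sketch only, not FireFly's actual migration code.

```python
# Illustrative mapping from the pre-v1.3.0 'state' enum on token pools to
# the new 'active' boolean (not actual FireFly migration code).

def migrate_pool_status(pool: dict) -> dict:
    migrated = dict(pool)
    state = migrated.pop("state", None)
    # 'confirmed' pools are committed on chain -> active=true;
    # 'pending' pools are not yet confirmed -> active=false.
    migrated["active"] = (state == "confirmed")
    return migrated

print(migrate_pool_status({"name": "pool1", "state": "confirmed"}))
# → {'name': 'pool1', 'active': True}
```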
"},{"location":"releasenotes/1.3_migration_guide/#fabconnect-event-considerations","title":"FabConnect event considerations","text":""},{"location":"releasenotes/1.3_migration_guide/#fabconnect-protocol-id-format-changes","title":"FabConnect Protocol ID format changes","text":"Prior to FireFly v1.3.0, when the FabConnect client indexed events submitted by the Fabric SDK, FireFly would deduplicate events into a single event because the protocol ID of the events compiled into a single block would evaluate to be the same. In this release, we have changed the format of the calculated protocol ID so that it is unique across events even if they are located within the same block. Crucially, the new format includes the transaction hash, so events are no longer alphanumerically sortable.
For an exact view of the changes, see PR #1345.
"},{"location":"releasenotes/1.3_migration_guide/#local-development-considerations","title":"Local development considerations","text":""},{"location":"releasenotes/1.3_migration_guide/#go-version-upgrade","title":"Go version upgrade","text":"FireFly v1.3.0 now uses Go 1.21 across all modules.
"},{"location":"swagger/","title":"API Spec","text":"This is the FireFly OpenAPI Specification document generated by FireFly
Note: The 'Try it out' buttons will not work on this page because it's not running against a live version of FireFly. To actually try it out, we recommend using the FireFly CLI to start an instance on your local machine (which will start the FireFly core on port 5000 by default) and then open the Swagger UI associated with your local node by opening a new tab and visiting http://localhost:5000/api
"},{"location":"troubleshooting/","title":"Troubleshooting","text":"This section includes troubleshooting tips for identifying issues with a running FireFly node, and for gathering useful data before opening an issue.
"},{"location":"troubleshooting/undelivered_messages/","title":"Undelivered messages","text":"When using FireFly in multiparty mode to deliver broadcast or private messages, one potential problem is that of undelivered messages. In general FireFly's message delivery service should be extremely reliable, but understanding when something has gone wrong (and how to recover) can be important for maintaining system health.
"},{"location":"troubleshooting/undelivered_messages/#background","title":"Background","text":"This guide assumes some familiarity with how multiparty event sequencing works. In general, FireFly messages come in three varieties: broadcast messages, pinned private messages, and unpinned private messages.
All messages are batched for efficiency, but in cases of low throughput, you may frequently see batches containing exactly one message.
\"Pinned\" messages are those that use the blockchain ledger for reliable timestamping and ordering. These messages have two pieces which must be received before the message can be processed: the batch is the actual contents of the message(s), and the pin is the lightweight blockchain transaction that records the existence and ordering of that batch. We frequently refer to this combination as a batch-pin.
Note: there is a fourth type of message denoted with the type "definition", used for things such as identity claims and advertisement of contract APIs. For most troubleshooting purposes these can be treated the same as pinned broadcast messages, as they follow the same pattern (with only a few additional processing steps inside FireFly).
"},{"location":"troubleshooting/undelivered_messages/#symptoms","title":"Symptoms","text":"When some part of the multiparty messaging infrastructure requires troubleshooting, common symptoms include:
When troubleshooting one of the symptoms above, the main goal is to identify the specific piece of the infrastructure that is experiencing an issue. This can lead you to diagnose specific issues such as misconfiguration, network problems, database integrity problems, or potential code bugs.
In all cases, the batch ID is the most critical piece of data for determining the nature of the issue. You can usually retrieve the batch for a particular message by querying /messages/<message-id> and looking for the batch field in the returned response. In rare cases, if this is not populated, you can also retrieve the message transaction via /messages/<message-id>/transaction, and then you can use the transaction ID to query /batches?tx.id=<transaction-id>.
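The lookup sequence above can be sketched as a small helper that derives the REST paths to query. The paths are the ones named in the text; the helper itself is hypothetical, for illustration only.

```python
# Hypothetical helper returning the FireFly REST paths to check when
# tracing a message to its batch, per the steps described above.

def batch_lookup_paths(message_id, transaction_id=None):
    paths = [f"/messages/{message_id}"]       # look for the 'batch' field here
    if transaction_id:                        # fallback when 'batch' is unset:
        paths.append(f"/messages/{message_id}/transaction")
        paths.append(f"/batches?tx.id={transaction_id}")
    return paths

print(batch_lookup_paths("msg-123", "tx-456"))
```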
The batch ID will be the same on all nodes involved in the messaging flow. Therefore, the following two steps can be easily performed to check for the existence of the expected items:
Check /batches/<batch-id> on each node that should have the message, and /pins?batch=<batch-id> on each node that should have the message (for pinned messages only). Then choose one of these scenarios to focus in on an area of interest:
"},{"location":"troubleshooting/undelivered_messages/#1-is-the-batch-missing-on-a-node-that-should-have-received-it","title":"1) Is the batch missing on a node that should have received it?","text":"For private messages, this indicates a potential problem with data exchange. Check the sending node to see if the FireFly operations succeeded when sending the batch via data exchange, and check the data exchange logs for any issues processing it (the FireFly operation ID can be used to trace the operation through data exchange as well). If an operation failed on the sending node, you may need to retry it with /operations/<op-id>/retry.
For broadcast messages, this indicates a potential problem with IPFS. Check the sending node to see if the FireFly operations succeeded when uploading the batch to IPFS, and the receiving node to see if the operations succeeded when downloading the batch from IPFS. If an operation failed, you may need to retry it with /operations/<op-id>/retry.
2) Is the pin missing on a node that should have received it? This indicates a potential problem with the blockchain connector. Check if the underlying blockchain node is healthy and mining blocks. Check the sending FireFly node to see if the operation succeeded when pinning the batch via the blockchain. Check the blockchain connector logs (such as evmconnect or fabconnect) to see if it is successfully processing events from the blockchain, or if it is encountering any errors before forwarding those events on to FireFly.
"},{"location":"troubleshooting/undelivered_messages/#3-are-the-batch-and-pin-both-present-but-the-messages-from-the-batch-are-still-stuck-in-sent-or-pending","title":"3) Are the batch and pin both present, but the messages from the batch are still stuck in \"sent\" or \"pending\"?","text":"Check the pin details to see if it contains a field \"dispatched\": true. If this field is false or missing, it means that the pin was received but couldn't be matched successfully with the off-chain batch contents. Check the FireFly logs and search for the batch ID - likely this issue is in FireFly and it will have logged some problem while aggregating the batch-pin. In some cases, the FireFly logs may indicate that the pin could not be dispatched because it was \"stuck\" behind another pin on the same context - so you may need to follow the trail to a batch-pin for a different batch and determine why that earlier one was not processed (by starting over on this rubric and troubleshooting that batch).
It's possible that the above steps may lead to an obvious solution (such as recovering a crashed service or retrying a failed operation). If they do not, you can open an issue. The more detail you can include from the troubleshooting above (including the type of message, the nodes involved, and the details on the batch and pin found when examining each node), the more likely it is that someone can help to suggest additional troubleshooting. Full logs from FireFly, and (as deemed relevant from the troubleshooting above) full logs from the data exchange or blockchain connector runtimes, will also make it easier to offer additional insight.
"},{"location":"tutorials/basic_auth/","title":"Basic Auth","text":""},{"location":"tutorials/basic_auth/#quick-reference","title":"Quick reference","text":"FireFly has a pluggable auth system which can be enabled at two different layers of the stack. At the top, auth can be enabled at the HTTP listener level. This will protect all requests to the given listener. FireFly has three different HTTP listeners (the API server, the SPI, and the metrics server), which could each use a different auth scheme.
Auth can also be enabled at the namespace level within FireFly as well. This enables several different use cases. For example, you might have two different teams that want to use the same FireFly node, each with different sets of authorized users. You could configure them to use separate namespaces and create separate auth schemes on each.
FireFly has a basic auth plugin built in, which we will be configuring in this tutorial.
NOTE: This guide assumes that you have already gone through the Getting Started Guide and have set up and run a stack at least once.
"},{"location":"tutorials/basic_auth/#additional-info","title":"Additional info","text":"FireFly's built-in basic auth plugin uses a password hash file to store the list of authorized users. FireFly uses the bcrypt algorithm to compare passwords against the stored hash. You can use the htpasswd command-line tool to generate a hash file.
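The password file produced by htpasswd is a simple user:hash text format, one entry per line. A minimal sketch of parsing that layout follows; FireFly performs the bcrypt comparison itself, so this only illustrates the file format, and the hash shown is a dummy value.

```python
# Sketch of parsing an htpasswd-style password file (user:bcrypt-hash per
# line). FireFly does the bcrypt comparison internally; this only shows
# the file layout. The sample hash below is a dummy, not a real bcrypt hash.

def parse_password_file(contents):
    users = {}
    for line in contents.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        username, _, pwhash = line.partition(":")
        users[username] = pwhash
    return users

sample = "firefly:$2y$05$abcdefghijklmnopqrstuv\n"
print(parse_password_file(sample))
```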
test_users password hash file","text":"touch test_users\n"},{"location":"tutorials/basic_auth/#create-a-user-named-firefly","title":"Create a user named firefly","text":"htpasswd -B test_users firefly\n You will be prompted to type the password for the new user twice. Optional: You can continue to add new users by running this command with a different username.
htpasswd -B test_users <username>\n"},{"location":"tutorials/basic_auth/#enable-basic-auth-at-the-namespace-level","title":"Enable basic auth at the Namespace level","text":"To enable auth at the namespace level, we will need to edit the FireFly core config file. You can find the config file for the first node in your stack at the following path:
~/.firefly/stacks/<stack_name>/runtime/config/firefly_core_0.yml\n Open the config file in your favorite editor and add the auth section to the plugins list:
plugins:\n auth:\n - name: test_user_auth\n type: basic\n basic:\n passwordfile: /etc/firefly/test_users\n You will also need to add test_user_auth to the list of plugins used by the default namespace:
namespaces:\n predefined:\n - plugins:\n - database0\n - blockchain0\n - dataexchange0\n - sharedstorage0\n - erc20_erc721\n - test_user_auth\n"},{"location":"tutorials/basic_auth/#mount-the-password-hash-file-in-the-docker-container","title":"Mount the password hash file in the Docker container","text":"If you set up your FireFly stack using the FireFly CLI we will need to mount the password hash file in the Docker container, so that FireFly can actually read the file. This can be done by editing the docker-compose.override.yml file at:
~/.firefly/stacks/<stack_name>/docker-compose.override.yml\n Edit the file to look like this, replacing the path to your test_users file:
# Add custom config overrides here\n# See https://docs.docker.com/compose/extends\nversion: \"2.1\"\nservices:\n firefly_core_0:\n volumes:\n - PATH_TO_YOUR_TEST_USERS_FILE:/etc/firefly/test_users\n"},{"location":"tutorials/basic_auth/#restart-your-firefly-core-container","title":"Restart your FireFly Core container","text":"To restart your FireFly stack and have Docker pick up the new volume, run:
ff stop <stack_name>\nff start <stack_name>\n NOTE: The FireFly basic auth plugin reads this file at startup and will not read it again during runtime. If you add any users or change passwords, restarting the node will be necessary to use an updated file.
"},{"location":"tutorials/basic_auth/#test-basic-auth","title":"Test basic auth","text":"After FireFly starts back up, you should be able to test that auth is working correctly by making an unauthenticated request to the API:
curl http://localhost:5000/api/v1/status\n{\"error\":\"FF00169: Unauthorized\"}\n However, if we add the username and password that we created above, the request should succeed:
curl -u \"firefly:firefly\" http://localhost:5000/api/v1/status\n{\"namespace\":{\"name\":\"default\",\"networkName\":\"default\",\"description\":\"Default predefined namespace\",\"created\":\"2022-10-18T16:35:57.603205507Z\"},\"node\":{\"name\":\"node_0\",\"registered\":false},\"org\":{\"name\":\"org_0\",\"registered\":false},\"plugins\":{\"blockchain\":[{\"name\":\"blockchain0\",\"pluginType\":\"ethereum\"}],\"database\":[{\"name\":\"database0\",\"pluginType\":\"sqlite3\"}],\"dataExchange\":[{\"name\":\"dataexchange0\",\"pluginType\":\"ffdx\"}],\"events\":[{\"pluginType\":\"websockets\"},{\"pluginType\":\"webhooks\"},{\"pluginType\":\"system\"}],\"identity\":[],\"sharedStorage\":[{\"name\":\"sharedstorage0\",\"pluginType\":\"ipfs\"}],\"tokens\":[{\"name\":\"erc20_erc721\",\"pluginType\":\"fftokens\"}]},\"multiparty\":{\"enabled\":true,\"contract\":{\"active\":{\"index\":0,\"location\":{\"address\":\"0xa750e2647e24828f4fec2e6e6d61fc08ccca5efa\"},\"info\":{\"subscription\":\"sb-d0642f14-f89a-41bb-6fd4-ae74b9501b6c\",\"version\":2}}}}}\n"},{"location":"tutorials/basic_auth/#enable-auth-at-the-http-listener-level","title":"Enable auth at the HTTP listener level","text":"You may also want to enable auth at the HTTP listener level, for instance on the SPI (Service Provider Interface) to limit administrative actions. To enable auth at the HTTP listener level we will need to edit the FireFly core config file. You can find the config file for the first node in your stack at the following path:
~/.firefly/stacks/<stack_name>/runtime/config/firefly_core_0.yml\n Open the config file in your favorite editor and change the spi section to look like the following:
spi:\n address: 0.0.0.0\n enabled: true\n port: 5101\n publicURL: http://127.0.0.1:5101\n auth:\n type: basic\n basic:\n passwordfile: /etc/firefly/test_users\n"},{"location":"tutorials/basic_auth/#restart-firefly-to-apply-the-changes","title":"Restart FireFly to apply the changes","text":"NOTE You will need to mount the password hash file following the instructions above if you have not already.
You can run the following to restart your stack:
ff stop <stack_name>\nff start <stack_name>\n"},{"location":"tutorials/basic_auth/#test-basic-auth_1","title":"Test basic auth","text":"After FireFly starts back up, an unauthenticated request to the SPI should be rejected as unauthorized.
curl http://127.0.0.1:5101/spi/v1/namespaces\n{\"error\":\"FF00169: Unauthorized\"}\n Adding the username and password that we set earlier, should make the request succeed.
curl -u \"firefly:firefly\" http://127.0.0.1:5101/spi/v1/namespaces\n[{\"name\":\"default\",\"networkName\":\"default\",\"description\":\"Default predefined namespace\",\"created\":\"2022-10-18T16:35:57.603205507Z\"}]\n"},{"location":"tutorials/broadcast_data/","title":"Broadcast data","text":""},{"location":"tutorials/broadcast_data/#quick-reference","title":"Quick reference","text":"A broadcast message is visible to all parties in the network. A message has one or more attached pieces of business data, optionally validated against a datatype. A batch can pin hundreds of message broadcasts. POST /api/v1/namespaces/default/messages/broadcast
{\n \"data\": [\n {\n \"value\": \"a string\"\n }\n ]\n}\n"},{"location":"tutorials/broadcast_data/#example-message-response","title":"Example message response","text":"{\n \"header\": {\n \"id\": \"607e22ad-04fa-434a-a073-54f528ca14fb\", // uniquely identifies this broadcast message\n \"type\": \"broadcast\", // set automatically\n \"txtype\": \"batch_pin\", // message will be batched, and sequenced via the blockchain\n \"author\": \"0x0a65365587a65ce44938eab5a765fe8bc6532bdf\", // set automatically in this example to the node org\n \"created\": \"2021-07-01T18:06:24.5817016Z\", // set automatically\n \"namespace\": \"default\", // the 'default' namespace was set in the URL\n \"topics\": [\n \"default\" // the default topic that the message is published on, if no topic is set\n ],\n // datahash is calculated from the data array below\n \"datahash\": \"5a7bbc074441fa3231d9c8fc942d68ef9b9b646dd234bb48c57826dc723b26fd\"\n },\n \"hash\": \"81acf8c8f7982dbc49258535561461601cbe769752fecec0f8ce0358664979e6\", // hash of the header\n \"state\": \"ready\", // this message is stored locally but not yet confirmed\n \"data\": [\n // one item of data was stored\n {\n \"id\": \"8d8635e2-7c90-4963-99cc-794c98a68b1d\", // can be used to query the data in the future\n \"hash\": \"c95d6352f524a770a787c16509237baf7eb59967699fb9a6d825270e7ec0eacf\" // sha256 hash of `\"a string\"`\n }\n ]\n}\n"},{"location":"tutorials/broadcast_data/#example-2-inline-object-data-to-a-topic-no-datatype-verification","title":"Example 2: Inline object data to a topic (no datatype verification)","text":"It is very good practice to set a tag and topic in each of your messages:
tag should tell the apps receiving the broadcast (including the local app) what to do when it receives the message. It's the reason for the broadcast - an application-specific type for the message. topic should be something like a well-known identifier that relates to the information you are publishing. It is used as an ordering context, so all broadcasts on a given topic are assured to be processed in order. POST /api/v1/namespaces/default/messages/broadcast
{\n \"header\": {\n \"tag\": \"new_widget_created\",\n \"topics\": [\"widget_id_12345\"]\n },\n \"data\": [\n {\n \"value\": {\n \"id\": \"widget_id_12345\",\n \"name\": \"superwidget\"\n }\n }\n ]\n}\n"},{"location":"tutorials/broadcast_data/#notes-on-why-setting-a-topic-is-important","title":"Notes on why setting a topic is important","text":"The FireFly aggregator uses the topic (obfuscated on chain) to determine if a message is the next message in an in-flight sequence for any groups the node is involved in. If it is, then that message must receive all off-chain private data and be confirmed before any subsequent messages can be confirmed on the same sequence.
So if you use the same topic in every message, then a single failed send on one topic blocks delivery of all messages between those parties, until the missing data arrives.
Instead it is best practice to set the topic on your messages to a value that identifies an ordered stream of business processing. Some examples:
The topic field is an array, because there are cases (such as merging two identifiers) where you need a message to be deterministically ordered across multiple sequences. However, this is an advanced use case and you are likely to set a single topic on the vast majority of your messages.
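The ordering behaviour described above can be modelled as independent per-topic queues: a stalled message blocks only messages on its own topic, not others. This is a toy illustration of the behaviour, not FireFly's internal aggregator code.

```python
from collections import defaultdict

# Toy model of per-topic ordered delivery: messages confirm strictly in
# order within a topic, so one message with missing off-chain data stalls
# only that topic's queue. Illustrative only - not FireFly internals.

def confirmable(messages, arrived):
    """Return IDs of messages that can confirm, given arrived batch data."""
    queues = defaultdict(list)
    for msg in messages:                  # messages in blockchain order
        queues[msg["topic"]].append(msg)
    confirmed = []
    for topic, queue in queues.items():
        for msg in queue:
            if msg["id"] not in arrived:  # missing data blocks the topic...
                break                     # ...but only this topic
            confirmed.append(msg["id"])
    return confirmed

msgs = [
    {"id": "m1", "topic": "widget_id_1"},
    {"id": "m2", "topic": "widget_id_1"},
    {"id": "m3", "topic": "widget_id_2"},
]
# m1's batch never arrived: m2 is stuck behind it, but m3 still confirms.
print(confirmable(msgs, arrived={"m2", "m3"}))  # → ['m3']
```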
Here we make two API calls.
Create the data object explicitly, using a multi-part form upload
You can also just post JSON to this endpoint
Broadcast a message referring to that data
The Blob attachment gets published to shared storage
Example curl command (Linux/Mac) to grab an image from the internet, and pipe it into a multi-part form post to FireFly.
Note: we use autometa to cause FireFly to automatically add the filename and size to the JSON part of the data object for us.
curl -sLo - https://github.com/hyperledger/firefly/raw/main/docs/firefly_logo.png \\\n | curl --form autometa=true --form file=@- \\\n http://localhost:5000/api/v1/namespaces/default/data\n"},{"location":"tutorials/broadcast_data/#example-data-response-from-blob-upload","title":"Example data response from Blob upload","text":"Status: 200 OK - your data is uploaded to your local FireFly node
At this point the data has not been shared with anyone else in the network.
{\n // A uniquely generated ID, we can refer to when sending this data to other parties\n \"id\": \"97eb750f-0d0b-4c1d-9e37-1e92d1a22bb8\",\n \"validator\": \"json\", // the \"value\" part is JSON\n \"namespace\": \"default\", // from the URL\n // The hash is a combination of the hash of the \"value\" metadata, and the\n // hash of the blob\n \"hash\": \"997af6a9a19f06cc8a46872617b8bf974b106f744b2e407e94cc6959aa8cf0b8\",\n \"created\": \"2021-07-01T20:20:35.5462306Z\",\n \"value\": {\n \"filename\": \"-\", // dash is how curl represents the filename for stdin\n \"size\": 31185 // the size of the blob data\n },\n \"blob\": {\n // A hash reference to the blob\n \"hash\": \"86e6b39b04b605dd1b03f70932976775962509d29ae1ad2628e684faabe48136\"\n // Note at this point there is no public reference. The only place\n // this data has been uploaded to is our own private data exchange.\n // It's ready to be published to everyone (broadcast), or privately\n // transferred (send) to other parties in the network. But that hasn't\n // happened yet.\n }\n}\n"},{"location":"tutorials/broadcast_data/#broadcast-the-uploaded-data","title":"Broadcast the uploaded data","text":"Just include a reference to the id returned from the upload.
POST /api/v1/namespaces/default/messages/broadcast
{\n \"data\": [\n {\n \"id\": \"97eb750f-0d0b-4c1d-9e37-1e92d1a22bb8\"\n }\n ]\n}\n"},{"location":"tutorials/broadcast_data/#broadcasting-messages-using-the-sandbox","title":"Broadcasting Messages using the Sandbox","text":"All of the functionality discussed above can be done through the FireFly Sandbox.
To get started, open up the Web UI and Sandbox UI for at least one of your members. The URLs for these were printed in your terminal when you started your FireFly stack.
In the sandbox, enter your message into the message field as seen in the screenshot below.
Notice how the data field in the center panel updates in real time.
Click the blue Run button. This should return a 202 response immediately in the Server Response section and will populate the right hand panel with transaction information after a few seconds.
Go back to the FireFly UI (the URL for this would have been shown in the terminal when you started the stack) and you'll see your successful blockchain transaction
"},{"location":"tutorials/create_custom_identity/","title":"Create a Custom Identity","text":""},{"location":"tutorials/create_custom_identity/#quick-reference","title":"Quick reference","text":"Out of the box, a FireFly Supernode contains both an org and a node identity. Your use case might demand more granular notions of identity (ex. customers, clients, etc.). Instead of creating a Supernode for each identity, you can create multiple custom identities within a FireFly Supernode.
If you haven't started a FireFly stack already, please go to the Getting Started guide on how to Start your environment
\u2190 \u2461 Start your environment
"},{"location":"tutorials/create_custom_identity/#step-1-create-a-new-account","title":"Step 1: Create a new account","text":"The FireFly CLI has a helpful command to create an account in a local development environment for you.
NOTE: In a production environment, key management actions such as creation, encryption, unlocking, etc. may be very different, depending on what type of blockchain node and signer your specific deployment is using.
To create a new account on your local stack, run:
ff accounts create <stack_name>\n {\n \"address\": \"0xc00109e112e21165c7065da776c75cfbc9cdc5e7\",\n \"privateKey\": \"...\"\n}\n The FireFly CLI has created a new private key and address for us to be able to use, and it has loaded the encrypted private key into the signing container. However, we haven't told FireFly itself about the new key, or who it belongs to. That's what we'll do in the next steps.
"},{"location":"tutorials/create_custom_identity/#step-2-query-the-parent-org-for-its-uuid","title":"Step 2: Query the parent org for its UUID","text":"If we want to create a new custom identity under the organizational identity that we're using in a multiparty network, first we will need to look up the UUID for our org identity. We can look that up by making a GET request to the status endpoint on the default namespace.
GET http://localhost:5000/api/v1/status
{\n \"namespace\": {...},\n \"node\": {...},\n \"org\": {\n \"name\": \"org_0\",\n \"registered\": true,\n \"did\": \"did:firefly:org/org_0\",\n \"id\": \"1c0abf75-0f3a-40e4-a8cd-5ff926f80aa8\", // We need this in Step 3\n \"verifiers\": [\n {\n \"type\": \"ethereum_address\",\n \"value\": \"0xd7320c76a2efc1909196dea876c4c7dabe49c0f4\"\n }\n ]\n },\n \"plugins\": {...},\n \"multiparty\": {...}\n}\n"},{"location":"tutorials/create_custom_identity/#step-3-register-the-new-custom-identity-with-firefly","title":"Step 3: Register the new custom identity with FireFly","text":"Now we can POST to the identities endpoint to create a new custom identity. We will include the UUID of the organizational identity from the previous step in the \"parent\" field in the request.
POST http://localhost:5000/api/v1/identities
{\n \"name\": \"myCustomIdentity\",\n \"key\": \"0xc00109e112e21165c7065da776c75cfbc9cdc5e7\", // Signing Key from Step 1\n \"parent\": \"1c0abf75-0f3a-40e4-a8cd-5ff926f80aa8\" // Org UUID from Step 2\n}\n"},{"location":"tutorials/create_custom_identity/#response_1","title":"Response","text":"{\n \"id\": \"5ea8f770-e004-48b5-af60-01994230ed05\",\n \"did\": \"did:firefly:myCustomIdentity\",\n \"type\": \"custom\",\n \"parent\": \"1c0abf75-0f3a-40e4-a8cd-5ff926f80aa8\",\n \"namespace\": \"\",\n \"name\": \"myCustomIdentity\",\n \"messages\": {\n \"claim\": \"817b7c79-a934-4936-bbb1-7dcc7c76c1f4\",\n \"verification\": \"ae55f998-49b1-4391-bed2-fa5e86dc85a2\",\n \"update\": null\n }\n}\n"},{"location":"tutorials/create_custom_identity/#step-4-query-the-new-custom-identity","title":"Step 4: Query the New Custom Identity","text":"Lastly, if we want to confirm that the new identity has been created, we can query the identities endpoint to see our new custom identity.
"},{"location":"tutorials/create_custom_identity/#request_2","title":"Request","text":"GET http://localhost:5000/api/v1/identities?fetchverifiers=true
NOTE: Using fetchverifiers=true will return the cryptographic verification mechanism for the FireFly identity.
[\n {\n \"id\": \"5ea8f770-e004-48b5-af60-01994230ed05\",\n \"did\": \"did:firefly:myCustomIdentity\",\n \"type\": \"custom\",\n \"parent\": \"1c0abf75-0f3a-40e4-a8cd-5ff926f80aa8\",\n \"namespace\": \"default\",\n \"name\": \"myCustomIdentity\",\n \"messages\": {\n \"claim\": \"817b7c79-a934-4936-bbb1-7dcc7c76c1f4\",\n \"verification\": \"ae55f998-49b1-4391-bed2-fa5e86dc85a2\",\n \"update\": null\n },\n \"created\": \"2022-09-19T18:10:47.365068013Z\",\n \"updated\": \"2022-09-19T18:10:47.365068013Z\",\n \"verifiers\": [\n {\n \"type\": \"ethereum_address\",\n \"value\": \"0xfe1ea8c8a065a0cda424e2351707c7e8eb4d2b6f\"\n }\n ]\n },\n { ... },\n { ... }\n]\n"},{"location":"tutorials/define_datatype/","title":"Define a datatype","text":""},{"location":"tutorials/define_datatype/#quick-reference","title":"Quick reference","text":"As your use case matures, it is important to agree formal datatypes between the parties. These canonical datatypes need to be defined and versioned, so that each member can extract and transform data from their internal systems into this datatype.
Datatypes are broadcast to the network so everybody refers to the same JSON schema when validating their data. The broadcast must complete before a datatype can be used by an application to upload/broadcast/send data. The same system of broadcast within FireFly is used to broadcast definitions of datatypes, as is used to broadcast the data itself.
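The versioned identity of a datatype can be pictured as a lookup keyed by name and version, where the first definition ordered onto the blockchain for a given key wins (a later conflicting definition is rejected). The sketch below is a toy model of that idea, with hypothetical names; it is not how FireFly stores datatypes.

```python
# Toy model of datatype identity: definitions are keyed by (name, version),
# and the first definition confirmed for a given key wins. Illustrative
# only - not FireFly's internal storage.

registry = {}

def define_datatype(name, version, schema):
    """Record a datatype definition; returns False on a conflicting key."""
    key = (name, version)
    if key in registry:
        return False      # an earlier definition already claimed this key
    registry[key] = schema
    return True

print(define_datatype("widget", "0.0.2", {"type": "object"}))  # → True
print(define_datatype("widget", "0.0.2", {"type": "string"}))  # → False
print(define_datatype("widget", "0.0.3", {"type": "object"}))  # → True
```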
"},{"location":"tutorials/define_datatype/#additional-info","title":"Additional info","text":"POST /api/v1/namespaces/{ns}/datatypes
{\n \"name\": \"widget\",\n \"version\": \"0.0.2\",\n \"value\": {\n \"$id\": \"https://example.com/widget.schema.json\",\n \"$schema\": \"https://json-schema.org/draft/2020-12/schema\",\n \"title\": \"Widget\",\n \"type\": \"object\",\n \"properties\": {\n \"id\": {\n \"type\": \"string\",\n \"description\": \"The unique identifier for the widget.\"\n },\n \"name\": {\n \"type\": \"string\",\n \"description\": \"The person's last name.\"\n }\n }\n }\n}\n"},{"location":"tutorials/define_datatype/#example-message-response","title":"Example message response","text":"Status: 202 Accepted - a broadcast message has been sent, and on confirmation the new datatype will be created (unless it conflicts with another definition with the same name and version that was ordered onto the blockchain before this definition).
{\n \"header\": {\n \"id\": \"727f7d3a-d07e-4e80-95af-59f8d2ac7531\", // this is the ID of the message, not the data type\n \"type\": \"definition\", // a special type for system broadcasts\n \"txtype\": \"batch_pin\", // the broadcast is pinned to the chain\n \"author\": \"0x0a65365587a65ce44938eab5a765fe8bc6532bdf\", // the local identity\n \"created\": \"2021-07-01T21:06:26.9997478Z\", // the time the broadcast was sent\n \"namespace\": \"ff_system\", // the data/message broadcast happens on the system namespace\n \"topic\": [\n \"ff_ns_default\" // the namespace itself is used in the topic\n ],\n \"tag\": \"ff_define_datatype\", // a tag instructing FireFly to process this as a datatype definition\n \"datahash\": \"56bd677e3e070ba62f547237edd7a90df5deaaf1a42e7d6435ec66a587c14370\"\n },\n \"hash\": \"5b6593720243831ba9e4ad002c550e95c63704b2c9dbdf31135d7d9207f8cae8\",\n \"state\": \"ready\", // this message is stored locally but not yet confirmed\n \"data\": [\n {\n \"id\": \"7539a0ab-78d8-4d42-b283-7e316b3afed3\", // this data object in the ff_system namespace, contains the schema\n \"hash\": \"22ba1cdf84f2a4aaffac665c83ff27c5431c0004dc72a9bf031ae35a75ac5aef\"\n }\n ]\n}\n"},{"location":"tutorials/define_datatype/#lookup-the-confirmed-data-type","title":"Lookup the confirmed data type","text":"GET /api/v1/namespaces/default/datatypes?name=widget&version=0.0.2
[\n {\n \"id\": \"421c94b1-66ce-4ba0-9794-7e03c63df29d\", // an ID allocated to the datatype\n \"message\": \"727f7d3a-d07e-4e80-95af-59f8d2ac7531\", // the message that broadcast this data type\n \"validator\": \"json\", // the type of validator that this datatype can be used for (this one is JSON Schema)\n \"namespace\": \"default\", // the namespace of the datatype\n \"name\": \"widget\", // the name of the datatype\n \"version\": \"0.0.2\", // the version of the data type\n \"hash\": \"a4dceb79a21937ca5ea9fa22419011ca937b4b8bc563d690cea3114af9abce2c\", // hash of the schema itself\n \"created\": \"2021-07-01T21:06:26.983986Z\", // time it was confirmed\n \"value\": {\n // the JSON schema itself\n \"$id\": \"https://example.com/widget.schema.json\",\n \"$schema\": \"https://json-schema.org/draft/2020-12/schema\",\n \"title\": \"Widget\",\n \"type\": \"object\",\n \"properties\": {\n \"id\": {\n \"type\": \"string\",\n \"description\": \"The unique identifier for the widget.\"\n },\n \"name\": {\n \"type\": \"string\",\n \"description\": \"The person's last name.\"\n }\n }\n }\n }\n]\n"},{"location":"tutorials/define_datatype/#example-private-send-referring-to-the-datatype","title":"Example private send referring to the datatype","text":"Once confirmed, a piece of data can be assigned that datatype and all FireFly nodes will verify it against the schema. On a sending node, the data will be rejected at upload/send time if it does not conform. On other nodes, bad data results in a message_rejected event (rather than message_confirmed) for any message that arrives referring to that data.
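To illustrate what the datatype check involves, here is a deliberately simplified, stdlib-only sketch that verifies the widget value's property types. FireFly uses a full JSON Schema validator; this reduced type check is only an illustration and is not equivalent to it.

```python
# Simplified illustration of the datatype check FireFly performs: the
# 'value' must conform to the widget JSON schema (here reduced to a type
# check on the declared properties). FireFly uses a real JSON Schema
# validator; this sketch is not equivalent to it.

WIDGET_PROPERTIES = {"id": str, "name": str}   # from the widget schema above

def conforms(value):
    return all(
        isinstance(value[prop], expected)
        for prop, expected in WIDGET_PROPERTIES.items()
        if prop in value
    )

good = {"id": "widget_id_12345", "name": "superwidget"}
bad = {"id": 12345, "name": "superwidget"}     # id must be a string

print(conforms(good), conforms(bad))  # → True False
```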
POST /api/v1/namespaces/default/send/message
{\n \"header\": {\n \"tag\": \"new_widget_created\",\n \"topic\": [\"widget_id_12345\"]\n },\n \"group\": {\n \"members\": [\n {\n \"identity\": \"org_1\"\n }\n ]\n },\n \"data\": [\n {\n \"datatype\": {\n \"name\": \"widget\",\n \"version\": \"0.0.2\"\n },\n \"value\": {\n \"id\": \"widget_id_12345\",\n \"name\": \"superwidget\"\n }\n }\n ]\n}\n"},{"location":"tutorials/define_datatype/#defining-datatypes-using-the-sandbox","title":"Defining Datatypes using the Sandbox","text":"You can also define a datatype through the FireFly Sandbox.
To get started, open up the Web UI and Sandbox UI for at least one of your members. The URLs for these were printed in your terminal when you started your FireFly stack.
In the sandbox, enter the datatype's name, version, and JSON Schema as seen in the screenshot below.
{\n \"name\": \"widget\",\n \"version\": \"0.0.2\",\n \"value\": {\n \"$id\": \"https://example.com/widget.schema.json\",\n \"$schema\": \"https://json-schema.org/draft/2020-12/schema\",\n \"title\": \"Widget\",\n \"type\": \"object\",\n \"properties\": {\n \"id\": {\n \"type\": \"string\",\n \"description\": \"The unique identifier for the widget.\"\n },\n \"name\": {\n \"type\": \"string\",\n \"description\": \"The person's last name.\"\n }\n }\n }\n}\n Notice how the data field in the center panel updates in real time.
Click the blue Run button. This should return a 202 response immediately in the Server Response section and will populate the right hand panel with transaction information after a few seconds.
Go back to the FireFly UI (the URL for this would have been shown in the terminal when you started the stack) and you'll see that you've successfully defined your datatype.
"},{"location":"tutorials/events/","title":"Listen for events","text":""},{"location":"tutorials/events/#quick-reference","title":"Quick reference","text":"Probably the most important aspect of FireFly is that it is an event-driven programming model.
Parties interact by sending messages and transactions to each other, on and off chain. Once aggregated and confirmed, those events drive processing in the other party.
This allows orchestration of complex multi-party system applications and business processes.
FireFly provides each party with their own private history, which includes all exchanges outbound and inbound performed through the node into the multi-party system. That includes blockchain backed transactions, as well as completely off-chain message exchanges.
The event transports are pluggable. The core transports are WebSockets and Webhooks. We focus on WebSockets in this getting started guide.
Check out the Request/Reply section for more information on Webhooks
"},{"location":"tutorials/events/#additional-info","title":"Additional info","text":"The simplest way to get started consuming events is with an ephemeral WebSocket listener.
Example connection URL:
ws://localhost:5000/ws?namespace=default&ephemeral&autoack&filter.events=message_confirmed
namespace=default - event listeners are scoped to a namespace; ephemeral - listen for events that occur while this connection is active, but do not remember the app instance (great for UIs); autoack - automatically acknowledge each event, so the next event is sent (great for UIs); filter.events=message_confirmed - only listen for events resulting from a message confirmation. There are a number of browser extensions that let you experiment with WebSockets:
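The connection URL above can be assembled programmatically; a small sketch (the localhost:5000 address assumes a default local FireFly stack):

```python
from urllib.parse import urlencode

# Build the ephemeral listener URL shown above. "ephemeral" and "autoack"
# are flag-style parameters, so they are appended without values.
base = "ws://localhost:5000/ws"
params = urlencode({"namespace": "default", "filter.events": "message_confirmed"})
url = f"{base}?{params}&ephemeral&autoack"
print(url)
```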
"},{"location":"tutorials/events/#example-event-payload","title":"Example event payload","text":"The events (by default) do not contain the payload data, just the event and referred message. This means the WebSocket payloads are a predictably small size, and the application can use the information in the message to post-filter the event to decide if it needs to download the full data.
There are server-side filters provided on events as well
{\n \"id\": \"8f0da4d7-8af7-48da-912d-187979bf60ed\",\n \"sequence\": 61,\n \"type\": \"message_confirmed\",\n \"namespace\": \"default\",\n \"reference\": \"9710a350-0ba1-43c6-90fc-352131ce818a\",\n \"created\": \"2021-07-02T04:37:47.6556589Z\",\n \"subscription\": {\n \"id\": \"2426c5b1-ffa9-4f7d-affb-e4e541945808\",\n \"namespace\": \"default\",\n \"name\": \"2426c5b1-ffa9-4f7d-affb-e4e541945808\"\n },\n \"message\": {\n \"header\": {\n \"id\": \"9710a350-0ba1-43c6-90fc-352131ce818a\",\n \"type\": \"broadcast\",\n \"txtype\": \"batch_pin\",\n \"author\": \"0x1d14b65d2dd5c13f6cb6d3dc4aa13c795a8f3b28\",\n \"created\": \"2021-07-02T04:37:40.1257944Z\",\n \"namespace\": \"default\",\n \"topic\": [\"default\"],\n \"datahash\": \"cd6a09a15ccd3e6ed1d67d69fa4773b563f27f17f3eaad611a2792ba945ca34f\"\n },\n \"hash\": \"1b6808d2b95b418e54e7bd34593bfa36a002b841ac42f89d00586dac61e8df43\",\n \"batchID\": \"16ffc02c-8cb0-4e2f-8b58-a707ad1d1eae\",\n \"state\": \"confirmed\",\n \"confirmed\": \"2021-07-02T04:37:47.6548399Z\",\n \"data\": [\n {\n \"id\": \"b3a814cc-17d1-45d5-975e-90279ed2c3fc\",\n \"hash\": \"9ddefe4435b21d901439e546d54a14a175a3493b9fd8fbf38d9ea6d3cbf70826\"\n }\n ]\n }\n}\n"},{"location":"tutorials/events/#download-the-message-and-data","title":"Download the message and data","text":"A simple REST API is provided to allow you to download the data associated with the message:
GET /api/v1/namespaces/default/messages/{id}?data=true
As you already have the message object in the event delivery, you can query just the array of data objects as follows:
GET /api/v1/namespaces/default/messages/{id}/data
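Putting the event payload and the download endpoints together, an application might post-filter events and fetch data only for messages it cares about. A sketch — the helper names and the localhost:5000 address are assumptions, not FireFly APIs:

```python
def should_download(event: dict, interesting_tags: set) -> bool:
    """Post-filter using only the event's embedded message header, so the
    full data is fetched just for messages the app cares about."""
    return event["message"]["header"].get("tag") in interesting_tags

def data_url(namespace: str, message_id: str) -> str:
    # Assumes the default local port; fetches just the data array
    return f"http://localhost:5000/api/v1/namespaces/{namespace}/messages/{message_id}/data"

event = {"message": {"header": {"id": "9710a350-0ba1-43c6-90fc-352131ce818a",
                                "tag": "new_widget_created"}}}
if should_download(event, {"new_widget_created"}):
    print(data_url("default", event["message"]["header"]["id"]))
```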
To reliably process messages within your application, you should first set up a subscription.
A subscription requests that:
This should be combined with manual acknowledgment of the events, where the application sends a payload such as the following in response to each event it receives (where the id comes from the event it received):
{ \"type\": \"ack\", \"id\": \"617db63-2cf5-4fa3-8320-46150cbb5372\" }\n You must send an acknowledgement for every message, or you will stop receiving messages.
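The acknowledgement payload above can be built mechanically from each received event; a minimal sketch (the make_ack helper is illustrative, not part of any FireFly SDK):

```python
import json

def make_ack(event: dict) -> str:
    """Build the acknowledgement payload for a received event.

    The id echoed back comes from the event being acknowledged - without
    this ack, FireFly stops delivering further events on the subscription.
    """
    return json.dumps({"type": "ack", "id": event["id"]})

# For the example event payload shown earlier:
event = {"id": "8f0da4d7-8af7-48da-912d-187979bf60ed", "type": "message_confirmed"}
print(make_ack(event))
```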
"},{"location":"tutorials/events/#set-up-the-websocket-subscription","title":"Set up the WebSocket subscription","text":"Each subscription is scoped to a namespace, and must have a name. You can then choose to perform server-side filtering on the events using regular expressions matched against the information in the event.
POST /namespaces/default/subscriptions
{\n \"transport\": \"websockets\",\n \"name\": \"app1\",\n \"filter\": {\n \"blockchainevent\": {\n \"listener\": \".*\",\n \"name\": \".*\"\n },\n \"events\": \".*\",\n \"message\": {\n \"author\": \".*\",\n \"group\": \".*\",\n \"tag\": \".*\",\n \"topics\": \".*\"\n },\n \"transaction\": {\n \"type\": \".*\"\n }\n },\n \"options\": {\n \"firstEvent\": \"newest\",\n \"readAhead\": 50\n }\n}\n"},{"location":"tutorials/events/#connect-to-consume-messages","title":"Connect to consume messages","text":"Example connection URL:
ws://localhost:5000/ws?namespace=default&name=app1
namespace=default - event listeners are scoped to a namespace; name=app1 - the subscription name. If you are interested in learning more about events for custom smart contracts, please see the Working with custom smart contracts section.
"},{"location":"tutorials/private_send/","title":"Privately send data","text":""},{"location":"tutorials/private_send/#quick-reference","title":"Quick reference","text":"A private message is sent to a restricted set of parties; the message has one or more attached pieces of business data, which can optionally be verified against a datatype; the group specifies who has visibility to the data; a message_confirmed event occurs on confirmation; a single batch can pin hundreds of private message sends; unpinned sends give a message_confirmed event immediately; recipients get message_confirmed events as soon as the data arrives. POST /api/v1/namespaces/default/messages/private
{\n \"data\": [\n {\n \"value\": \"a string\"\n }\n ],\n \"group\": {\n \"members\": [\n {\n \"identity\": \"org_1\"\n }\n ]\n }\n}\n"},{"location":"tutorials/private_send/#example-message-response","title":"Example message response","text":"Status: 202 Accepted - the message is on its way, but has not yet been confirmed.
{\n \"header\": {\n \"id\": \"c387e9d2-bdac-44cc-9dd5-5e7f0b6b0e58\", // uniquely identifies this private message\n \"type\": \"private\", // set automatically\n \"txtype\": \"batch_pin\", // message will be batched, and sequenced via the blockchain\n \"author\": \"0x0a65365587a65ce44938eab5a765fe8bc6532bdf\", // set automatically in this example to the node org\n \"created\": \"2021-07-02T02:37:13.4642085Z\", // set automatically\n \"namespace\": \"default\", // the 'default' namespace was set in the URL\n // The group hash is calculated from the resolved list of group participants.\n // The first time a group is used, the participant list is sent privately along with the\n // batch of messages in a `groupinit` message.\n \"group\": \"2aa5297b5eed0c3a612a667c727ca38b54fb3b5cc245ebac4c2c7abe490bdf6c\",\n \"topics\": [\n \"default\" // the default topic that the message is published on, if no topic is set\n ],\n // datahash is calculated from the data array below\n \"datahash\": \"24b2d583b87eda952fa00e02c6de4f78110df63218eddf568f0240be3d02c866\"\n },\n \"hash\": \"423ad7d99fd30ff679270ad2b6b35cdd85d48db30bafb71464ca1527ce114a60\", // hash of the header\n \"state\": \"ready\", // this message is stored locally but not yet confirmed\n \"data\": [\n // one item of data was stored\n {\n \"id\": \"8d8635e2-7c90-4963-99cc-794c98a68b1d\", // can be used to query the data in the future\n \"hash\": \"c95d6352f524a770a787c16509237baf7eb59967699fb9a6d825270e7ec0eacf\" // sha256 hash of `\"a string\"`\n }\n ]\n}\n"},{"location":"tutorials/private_send/#example-2-unpinned-private-send-of-in-line-string-data","title":"Example 2: Unpinned private send of in-line string data","text":"Set header.txtype: \"none\" to disable pinning of the private message send to the blockchain. The message is sent immediately (no batching) over the private data exchange.
POST /api/v1/namespaces/default/messages/private
{\n \"header\": {\n \"txtype\": \"none\"\n },\n \"data\": [\n {\n \"value\": \"a string\"\n }\n ],\n \"group\": {\n \"members\": [\n {\n \"identity\": \"org_1\"\n }\n ]\n }\n}\n"},{"location":"tutorials/private_send/#example-3-inline-object-data-to-a-topic-no-datatype-verification","title":"Example 3: Inline object data to a topic (no datatype verification)","text":"It is very good practice to set a tag and topic in each of your messages:
tag should tell the apps receiving the private send (including the local app) what to do when it receives the message. It's the reason for the send - an application-specific type for the message. topic should be something like a well known identifier that relates to the information you are publishing. It is used as an ordering context, so all sends on a given topic are assured to be processed in order. POST /api/v1/namespaces/default/messages/private
{\n \"header\": {\n \"tag\": \"new_widget_created\",\n \"topics\": [\"widget_id_12345\"]\n },\n \"group\": {\n \"members\": [\n {\n \"identity\": \"org_1\"\n }\n ]\n },\n \"data\": [\n {\n \"value\": {\n \"id\": \"widget_id_12345\",\n \"name\": \"superwidget\"\n }\n }\n ]\n}\n"},{"location":"tutorials/private_send/#notes-on-why-setting-a-topic-is-important","title":"Notes on why setting a topic is important","text":"The FireFly aggregator uses the topic (obfuscated on chain) to determine if a message is the next message in an in-flight sequence for any groups the node is involved in. If it is, then that message must receive all off-chain private data and be confirmed before any subsequent messages can be confirmed on the same sequence.
So if you use the same topic in every message, then a single failed send on one topic blocks delivery of all messages between those parties, until the missing data arrives.
Instead, it is best practice to set the topic on your messages to a value that identifies an ordered stream of business processing. Some examples:
The topic field is an array, because there are cases (such as merging two identifiers) where you need a message to be deterministically ordered across multiple sequences. However, this is an advanced use case and you are likely to set a single topic on the vast majority of your messages.
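As an illustrative sketch of that advanced case, a message merging two identifier sequences might carry both topics (all values here are hypothetical):

```python
# Hypothetical example: deterministically ordering one message across two
# existing topic sequences (e.g. merging two identifiers into one record).
merge_message = {
    "header": {
        "tag": "widget_ids_merged",  # hypothetical application tag
        "topics": ["widget_id_12345", "widget_id_67890"],  # both sequences
    },
    "data": [{"value": {"survivor": "widget_id_12345"}}],
}
print(len(merge_message["header"]["topics"]))  # 2
```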
Here we make two API calls.
Create the data object explicitly, using a multi-part form upload
You can also just post JSON to this endpoint
Privately send a message referring to that data
The Blob is sent privately to each party
Example curl command (Linux/Mac) to grab an image from the internet, and pipe it into a multi-part form post to FireFly.
Note we use autometa to cause FireFly to automatically add the filename, and size, to the JSON part of the data object for us.
curl -sLo - https://github.com/hyperledger/firefly/raw/main/docs/firefly_logo.png \\\n | curl --form autometa=true --form file=@- \\\n http://localhost:5000/api/v1/namespaces/default/data\n"},{"location":"tutorials/private_send/#example-data-response-from-blob-upload","title":"Example data response from Blob upload","text":"Status: 200 OK - your data is uploaded to your local FireFly node
At this point the data has not been shared with anyone else in the network.
{\n // A uniquely generated ID, we can refer to when sending this data to other parties\n \"id\": \"97eb750f-0d0b-4c1d-9e37-1e92d1a22bb8\",\n \"validator\": \"json\", // the \"value\" part is JSON\n \"namespace\": \"default\", // from the URL\n // The hash is a combination of the hash of the \"value\" metadata, and the\n // hash of the blob\n \"hash\": \"997af6a9a19f06cc8a46872617b8bf974b106f744b2e407e94cc6959aa8cf0b8\",\n \"created\": \"2021-07-01T20:20:35.5462306Z\",\n \"value\": {\n \"filename\": \"-\", // dash is how curl represents the filename for stdin\n \"size\": 31185 // the size of the blob data\n },\n \"blob\": {\n // A hash reference to the blob\n \"hash\": \"86e6b39b04b605dd1b03f70932976775962509d29ae1ad2628e684faabe48136\"\n }\n}\n"},{"location":"tutorials/private_send/#send-the-uploaded-data-privately","title":"Send the uploaded data privately","text":"Just include a reference to the id returned from the upload.
POST /api/v1/namespaces/default/messages/private
{\n \"data\": [\n {\n \"id\": \"97eb750f-0d0b-4c1d-9e37-1e92d1a22bb8\"\n }\n ],\n \"group\": {\n \"members\": [\n {\n \"identity\": \"org_1\"\n }\n ]\n }\n}\n"},{"location":"tutorials/private_send/#sending-private-messages-using-the-sandbox","title":"Sending Private Messages using the Sandbox","text":"All of the functionality discussed above can be done through the FireFly Sandbox.
To get started, open up the Web UI and Sandbox UI for at least one of your members. The URLs for these were printed in your terminal when you started your FireFly stack.
Make sure to expand the \"Send a Private Message\" section. Enter your message into the message field as seen in the screenshot below. Because we are sending a private message, be sure to choose a message recipient.
Notice how the data field in the center panel updates in real time as you update the message you wish to send.
Click the blue Run button. This should return a 202 response immediately in the Server Response section and will populate the right hand panel with transaction information after a few seconds.
Go back to the FireFly UI (the URL for this would have been shown in the terminal when you started the stack) and you'll see your successful blockchain transaction. Compare the \"Recent Network Changes\" widget across your members' UIs: with private messages, only the members of the group receive the message data.
"},{"location":"tutorials/query_messages/","title":"Explore messages","text":""},{"location":"tutorials/query_messages/#quick-reference","title":"Quick reference","text":"The FireFly Explorer is a great way to view the messages sent and received by your node.
Just open /ui on your FireFly node to access it.
This builds on the APIs to query and filter messages, described below
"},{"location":"tutorials/query_messages/#additional-info","title":"Additional info","text":"These are the messages ready to be processed in your application. All data associated with the message (including Blob attachments) is available, and if they are sequenced by the blockchain, then those blockchain transactions are complete.
The order in which you process messages should be determined by absolute order of message_confirmed events - queryable via the events collection, or through event listeners (discussed next in the getting started guide).
That is because messages are ordered by timestamp, which is potentially subject to adjustments of the clock, whereas events are ordered by insertion order into the database, so changes in the clock do not affect their order.
GET /api/v1/namespaces/{ns}/messages?pending=false&limit=100
[\n {\n \"header\": {\n \"id\": \"423302bb-abfc-4d64-892d-38b2fdfe1549\",\n \"type\": \"private\", // this was a private send\n \"txtype\": \"batch_pin\", // pinned in a batch to the blockchain\n \"author\": \"0x1d14b65d2dd5c13f6cb6d3dc4aa13c795a8f3b28\",\n \"created\": \"2021-07-02T03:09:40.2606238Z\",\n \"namespace\": \"default\",\n \"group\": \"2aa5297b5eed0c3a612a667c727ca38b54fb3b5cc245ebac4c2c7abe490bdf6c\", // sent to this group\n \"topic\": [\"widget_id_12345\"],\n \"tag\": \"new_widget_created\",\n \"datahash\": \"551dd261e80ce76b1908c031cff8a707bd76376d6eddfdc1040c2ed6481ec8dd\"\n },\n \"hash\": \"bf2ca94db8c31bae3cae974bb626fa822c6eee5f572d274d72281e72537b30b3\",\n \"batch\": \"f7ac773d-885a-4d73-ac6b-c09f5346a051\", // the batch ID that pinned this message to the chain\n \"state\": \"confirmed\", // message is now confirmed\n \"confirmed\": \"2021-07-02T03:09:49.9207211Z\", // timestamp when this node confirmed the message\n \"data\": [\n {\n \"id\": \"914eed77-8789-451c-b55f-ba9570a71eba\",\n \"hash\": \"9541cabc750c692e553a421a6c5c07ebcae820774d2d8d0b88fac2a231c10bf2\"\n }\n ],\n \"pins\": [\n // A \"pin\" is an identifier that is used by FireFly for sequencing messages.\n //\n // For private messages, it is an obfuscated representation of the sequence of this message,\n // on a topic, within this group, from this sender. There will be one pin per topic. 
You will find these\n // pins in the blockchain transaction, as well as the off-chain data.\n // Each one is unique, and without the group hash, very difficult to correlate - meaning\n // the data on-chain provides a high level of privacy.\n //\n // Note for broadcast (which does not require obfuscation), it is simply a hash of the topic.\n // So you will see the same pin for all messages on the same topic.\n \"ee56de6241522ab0ad8266faebf2c0f1dc11be7bd0c41d847998135b45685b77\"\n ]\n }\n]\n"},{"location":"tutorials/query_messages/#example-2-query-all-messages","title":"Example 2: Query all messages","text":"The natural sort order the API will return for messages is:
created timestamp order; confirmed timestamp order. GET /api/v1/namespaces/{ns}/messages
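Returning to the pins shown in the message example above: for broadcast messages the pin is described as simply a hash of the topic. Conceptually, that behaviour looks like the sketch below (the exact byte encoding FireFly hashes is an assumption here, not its actual implementation):

```python
import hashlib

def broadcast_pin(topic: str) -> str:
    """Conceptual sketch only: a broadcast pin behaves like a hash of the
    topic, so every message on the same topic yields the same pin. The
    exact encoding FireFly hashes is an assumption in this sketch."""
    return hashlib.sha256(topic.encode("utf-8")).hexdigest()

# Same topic -> same pin; different topics -> different pins
print(broadcast_pin("default") == broadcast_pin("default"))          # True
print(broadcast_pin("default") == broadcast_pin("widget_id_12345"))  # False
```

Private message pins are obfuscated with the group hash and sender sequence, which is why they cannot be correlated this simply.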
At some point you may need to rotate certificates on your Data Exchange nodes. FireFly provides an API to update a node identity, but there are a few prerequisite steps to load a new certificate on the Data Exchange node itself. This guide will walk you through that process. For more information on different types of identities in FireFly, please see the Reference page on Identities.
NOTE: This guide assumes that you are working in a local development environment that was set up with the Getting Started Guide. For a production deployment, the exact process to accomplish each step may be different. For example, you may generate your certs with a CA, or in some other manner. But the high level steps remain the same.
The high level steps to the process (described in detail below) are:
Generate a new certificate and key for each Data Exchange node; install the new certs on each Data Exchange file system; clear out the old certs in each node's peer-certs directory; restart each Data Exchange process; PATCH the node identity using the FireFly API
For the first member of a FireFly stack you run:
openssl req -new -x509 -nodes -days 365 -subj /CN=dataexchange_0/O=member_0 -keyout key.pem -out cert.pem\n For the second member:
openssl req -new -x509 -nodes -days 365 -subj /CN=dataexchange_1/O=member_1 -keyout key.pem -out cert.pem\n NOTE: If you perform these two commands in the same directory, the second one will overwrite the output of the first. It is advisable to run them in separate directories, or copy the cert and key to the Data Exchange file system (the next step below) before generating the next cert / key pair.
"},{"location":"tutorials/rotate_dx_certs/#install-the-new-certs-on-each-data-exchange-file-system","title":"Install the new certs on each Data Exchange File System","text":"For a dev environment created with the FireFly CLI, the certificate and key will be located in the /data directory on the Data Exchange node's file system. You can use the docker cp command to copy the file to the correct location, then set the file ownership correctly.
docker cp cert.pem dev_dataexchange_0:/data/cert.pem\ndocker exec dev_dataexchange_0 chown root:root /data/cert.pem\n NOTE: If your environment is not called dev you may need to change the beginning of the container name in the Docker commands listed in this guide.
peer-certs directory","text":"To clear out the old certs from the first Data Exchange node run:
docker exec dev_dataexchange_0 sh -c \"rm /data/peer-certs/*.pem\"\n To clear out the old certs from the second Data Exchange node run:
docker exec dev_dataexchange_1 sh -c \"rm /data/peer-certs/*.pem\"\n"},{"location":"tutorials/rotate_dx_certs/#restart-each-data-exchange-process","title":"Restart each Data Exchange process","text":"To restart your Data Exchange processes, run:
docker restart dev_dataexchange_0\n docker restart dev_dataexchange_1\n"},{"location":"tutorials/rotate_dx_certs/#patch-the-node-identity-using-the-firefly-api","title":"PATCH the node identity using the FireFly API","text":"The final step is to broadcast the new cert for each node, from the FireFly node that will be using that cert. You will need to lookup the UUID for the node identity in order to update it.
"},{"location":"tutorials/rotate_dx_certs/#request","title":"Request","text":"GET http://localhost:5000/api/v1/namespaces/default/identities
In the JSON response body, look for the node identity that belongs on this FireFly instance. Here is the node identity from an example stack:
...\n {\n \"id\": \"20da74a2-d4e6-4eaf-8506-e7cd205d8254\",\n \"did\": \"did:firefly:node/node_2b9630\",\n \"type\": \"node\",\n \"parent\": \"41e93d92-d0da-4e5a-9cee-adf33f017a60\",\n \"namespace\": \"default\",\n \"name\": \"node_2b9630\",\n \"profile\": {\n \"cert\": \"-----BEGIN CERTIFICATE-----\\nMIIC1DCCAbwCCQDa9x3wC7wepDANBgkqhkiG9w0BAQsFADAsMRcwFQYDVQQDDA5k\\nYXRhZXhjaGFuZ2VfMDERMA8GA1UECgwIbWVtYmVyXzAwHhcNMjMwMjA2MTQwMTEy\\nWhcNMjQwMjA2MTQwMTEyWjAsMRcwFQYDVQQDDA5kYXRhZXhjaGFuZ2VfMDERMA8G\\nA1UECgwIbWVtYmVyXzAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDJ\\nSgtJw99V7EynvqxWdJkeiUlOg3y+JtJlhxGC//JLp+4sYCtOMriULNf5ouImxniR\\nO2vEd+LNdMuREN4oZdUHtJD4MM7lOFw/0ICNEPJ+oEoUTzOC0OK68sA+OCybeS2L\\nmLBu4yvWDkpufR8bxBJfBGarTAFl36ao1Eoogn4m9gmVrX+V5SOKUhyhlHZFkZNb\\ne0flwQmDMKg6qAbHf3j8cnrrZp26n68IGjwqySPFIRLFSz28zzMYtyzo4b9cF9NW\\nGxusMHsExX5gzlTjNacGx8Tlzwjfolt23D+WHhZX/gekOsFiV78mVjgJanE2ls6D\\n5ZlXi5iQSwm8dlmo9RxFAgMBAAEwDQYJKoZIhvcNAQELBQADggEBAAwr4aAvQnXG\\nkO3xNO+7NGzbb/Nyck5udiQ3RmlZBEJSUsPCsWd4SBhH7LvgbT9ECuAEjgH+2Ip7\\nusd8CROr3sTb9t+7Krk+ljgZirkjq4j/mIRlqHcBJeBtylOz2p0oPsitlI8Yea2D\\nQ4/Xru6txUKNK+Yut3G9qvg/vm9TAwkNHSthzb26bI7s6lx9ZSuFbbG6mR+RQ+8A\\nU4AX1DVo5QyTwSi1lp0+pKFEgtutmWGYn8oT/ya+OLzj+l7Ul4HE/mEAnvECtA7r\\nOC8AEjC5T4gUsLt2IXW9a7lCgovjHjHIySQyqsdYBjkKSn5iw2LRovUWxT1GBvwH\\nFkTvCpHhgko=\\n-----END CERTIFICATE-----\\n\",\n \"endpoint\": \"https://dataexchange_0:3001\",\n \"id\": \"member_0/node_2b9630\"\n },\n \"messages\": {\n \"claim\": \"95da690b-bb05-4873-9478-942f607f363a\",\n \"verification\": null,\n \"update\": null\n },\n \"created\": \"2023-02-06T14:02:50.874319382Z\",\n \"updated\": \"2023-02-06T14:02:50.874319382Z\"\n },\n...\n Copy the UUID from the id field, and add that to the PATCH request. In this case it is 20da74a2-d4e6-4eaf-8506-e7cd205d8254.
Now we will send the new certificate to FireFly. Put the contents of your cert.pem file in the cert field.
NOTE: Usually the cert.pem file will contain line breaks which will not be handled correctly by JSON parsers. Be sure to replace those line breaks with \\n so that the cert field is all on one line as shown below.
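One convenient way to produce that single-line form is to let a JSON encoder do the escaping; a small sketch (the helper name is illustrative):

```python
import json

def cert_to_json_field(pem: str) -> str:
    """Escape a PEM cert for the one-line "cert" JSON field:
    json.dumps turns real line breaks into literal \\n sequences."""
    return json.dumps(pem)

pem = "-----BEGIN CERTIFICATE-----\nMIIC1D...\n-----END CERTIFICATE-----\n"
print(cert_to_json_field(pem))
```

The printed string can be pasted directly as the value of the cert field in the PATCH body.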
PATCH http://localhost:5000/api/v1/namespaces/default/identities/20da74a2-d4e6-4eaf-8506-e7cd205d8254
{\n \"profile\": {\n \"cert\": \"-----BEGIN CERTIFICATE-----\\nMIIC1DCCAbwCCQDeKjPt3siRHzANBgkqhkiG9w0BAQsFADAsMRcwFQYDVQQDDA5k\\nYXRhZXhjaGFuZ2VfMDERMA8GA1UECgwIbWVtYmVyXzAwHhcNMjMwMjA2MTYxNTU3\\nWhcNMjQwMjA2MTYxNTU3WjAsMRcwFQYDVQQDDA5kYXRhZXhjaGFuZ2VfMDERMA8G\\nA1UECgwIbWVtYmVyXzAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCy\\nEJaqDskxhkPHmCqj5Mxq+9QX1ec19fulh9Zvp8dLA6bfeg4fdQ9Ha7APG6w/0K8S\\nEaXOflSpXb0oKMe42amIqwvQaqTOA97HIe5R2HZxA1RWqXf+AueowWgI4crxr2M0\\nZCiXHyiZKpB8nzO+bdO9AKeYnzbhCsO0gq4LPOgpPjYkHPKhabeMVZilZypDVOGk\\nLU+ReQoVEZ+P+t0B/9v+5IQ2yyH41n5dh6lKv4mIaC1OBtLc+Pd6DtbRb7pijkgo\\n+LyqSdl24RHhSgZcTtMQfoRIVzvMkhF5SiJczOC4R8hmt62jtWadO4D5ZtJ7N37/\\noAG/7KJO4HbByVf4xOcDAgMBAAEwDQYJKoZIhvcNAQELBQADggEBAKWbQftV05Fc\\niwVtZpyvP2l4BvKXvMOyg4GKcnBSZol7UwCNrjwYSjqgqyuedTSZXHNhGFxQbfAC\\n94H25bDhWOfd7JH2D7E6RRe3eD9ouDnrt+de7JulsNsFK23IM4Nz5mRhRMVy/5p5\\n9yrsdW+5MXKWgz9569TIjiciCf0JqB7iVPwRrQyz5gqOiPf81PlyaMDeaH9wXtra\\n/1ZRipXiGiNroSPFrQjIVLKWdmnhWKWjFXsiijdSV/5E+8dBb3t//kEZ8UWfBrc4\\nfYVuZ8SJtm2ZzBmit3HFatDlFTE8PanRf/UDALUp4p6YKJ8NE2T8g/uDE0ee1pnF\\nIDsrC1GX7rs=\\n-----END CERTIFICATE-----\\n\",\n \"endpoint\": \"https://dataexchange_0:3001\",\n \"id\": \"member_0\"\n }\n}\n"},{"location":"tutorials/rotate_dx_certs/#response_1","title":"Response","text":"{\n \"id\": \"20da74a2-d4e6-4eaf-8506-e7cd205d8254\",\n \"did\": \"did:firefly:node/node_2b9630\",\n \"type\": \"node\",\n \"parent\": \"41e93d92-d0da-4e5a-9cee-adf33f017a60\",\n \"namespace\": \"default\",\n \"name\": \"node_2b9630\",\n \"profile\": {\n \"cert\": \"-----BEGIN 
CERTIFICATE-----\\nMIIC1DCCAbwCCQDeKjPt3siRHzANBgkqhkiG9w0BAQsFADAsMRcwFQYDVQQDDA5k\\nYXRhZXhjaGFuZ2VfMDERMA8GA1UECgwIbWVtYmVyXzAwHhcNMjMwMjA2MTYxNTU3\\nWhcNMjQwMjA2MTYxNTU3WjAsMRcwFQYDVQQDDA5kYXRhZXhjaGFuZ2VfMDERMA8G\\nA1UECgwIbWVtYmVyXzAwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCy\\nEJaqDskxhkPHmCqj5Mxq+9QX1ec19fulh9Zvp8dLA6bfeg4fdQ9Ha7APG6w/0K8S\\nEaXOflSpXb0oKMe42amIqwvQaqTOA97HIe5R2HZxA1RWqXf+AueowWgI4crxr2M0\\nZCiXHyiZKpB8nzO+bdO9AKeYnzbhCsO0gq4LPOgpPjYkHPKhabeMVZilZypDVOGk\\nLU+ReQoVEZ+P+t0B/9v+5IQ2yyH41n5dh6lKv4mIaC1OBtLc+Pd6DtbRb7pijkgo\\n+LyqSdl24RHhSgZcTtMQfoRIVzvMkhF5SiJczOC4R8hmt62jtWadO4D5ZtJ7N37/\\noAG/7KJO4HbByVf4xOcDAgMBAAEwDQYJKoZIhvcNAQELBQADggEBAKWbQftV05Fc\\niwVtZpyvP2l4BvKXvMOyg4GKcnBSZol7UwCNrjwYSjqgqyuedTSZXHNhGFxQbfAC\\n94H25bDhWOfd7JH2D7E6RRe3eD9ouDnrt+de7JulsNsFK23IM4Nz5mRhRMVy/5p5\\n9yrsdW+5MXKWgz9569TIjiciCf0JqB7iVPwRrQyz5gqOiPf81PlyaMDeaH9wXtra\\n/1ZRipXiGiNroSPFrQjIVLKWdmnhWKWjFXsiijdSV/5E+8dBb3t//kEZ8UWfBrc4\\nfYVuZ8SJtm2ZzBmit3HFatDlFTE8PanRf/UDALUp4p6YKJ8NE2T8g/uDE0ee1pnF\\nIDsrC1GX7rs=\\n-----END CERTIFICATE-----\\n\",\n \"endpoint\": \"https://dataexchange_0:3001\",\n \"id\": \"member_0\"\n },\n \"messages\": {\n \"claim\": \"95da690b-bb05-4873-9478-942f607f363a\",\n \"verification\": null,\n \"update\": \"5782cd7c-7643-4d7f-811b-02765a7aaec5\"\n },\n \"created\": \"2023-02-06T14:02:50.874319382Z\",\n \"updated\": \"2023-02-06T14:02:50.874319382Z\"\n}\n Repeat these requests for the second member/node running on port 5001. After that you should be back up and running with your new certs, and you should be able to send private messages again.
Starting with FireFly v1.1, it's easy to connect to public Ethereum chains. This guide will walk you through the steps to create a local FireFly development environment and connect it to the Arbitrum Nitro Goerli Rollup Testnet.
"},{"location":"tutorials/chains/arbitrum/#previous-steps-install-the-firefly-cli","title":"Previous steps: Install the FireFly CLI","text":"If you haven't set up the FireFly CLI already, please go back to the Getting Started guide and read the section on how to Install the FireFly CLI.
\u2190 \u2460 Install the FireFly CLI
"},{"location":"tutorials/chains/arbitrum/#create-an-evmconnectyml-config-file","title":"Create an evmconnect.yml config file","text":"In order to connect to the Arbitrum Nitro Goerli Rollup testnet, you will need to set a few configuration options for the evmconnect blockchain connector. Create a text file called evmconnect.yml with the following contents:
confirmations:\n required: 4 # choose the number of confirmations you require\npolicyengine.simple:\n fixedGasPrice: null\n gasOracle:\n mode: connector\n For more info about confirmations, see Public vs. Permissioned
For this tutorial, we will assume this file is saved at ~/Desktop/evmconnect.yml. If your path is different, you will need to adjust the path in the next command below.
To create a local FireFly development stack and connect it to the Arbitrum testnet, we will use command line flags to customize the following settings:
arbitrum with 1 member; multiparty mode disabled - we are going to be using this FireFly node as a Web3 gateway, and we don't need to communicate with a consortium here; chain ID 421613 (the correct ID for the Arbitrum Nitro Goerli Rollup testnet); the evmconnect config file created above. To do this, run the following command:
ff init ethereum arbitrum 1 \\\n --multiparty=false \\\n -n remote-rpc \\\n --remote-node-url <selected RPC endpoint> \\\n --chain-id 421613 \\\n --connector-config ~/Desktop/evmconnect.yml\n"},{"location":"tutorials/chains/arbitrum/#start-the-stack","title":"Start the stack","text":"Now you should be able to start your stack by running:
ff start arbitrum\n After some time it should print out the following:
Web UI for member '0': http://127.0.0.1:5000/ui\nSandbox UI for member '0': http://127.0.0.1:5109\n\n\nTo see logs for your stack run:\n\nff logs arbitrum\n"},{"location":"tutorials/chains/arbitrum/#get-some-aribitrum","title":"Get some Arbitrum","text":"At this point you should have a working FireFly stack, talking to a public chain. However, you won't be able to run any transactions just yet, because you don't have any way to pay for gas.
First, you will need to know what signing address your FireFly node is using. To check that, you can run:
ff accounts list arbitrum\n[\n {\n \"address\": \"0x225764d1be1f137be23ddfc426b819512b5d0f3e\",\n \"privateKey\": \"...\"\n }\n]\n Copy the address listed in the output from this command. Next, check out this article https://medium.com/offchainlabs/new-g%C3%B6rli-testnet-and-getting-rinkeby-ready-for-nitro-3ff590448053 and follow the instructions to send a tweet to the developers. Make sure to change the address to the one in the CLI.
"},{"location":"tutorials/chains/arbitrum/#confirm-the-transaction-on-bscscan","title":"Confirm the transaction on the Arbitrum block explorer","text":"You should be able to look up your account on https://goerli-rollup-explorer.arbitrum.io/ and see that you now have a balance of 0.001 ether. Simply paste in your account address to search for it.
"},{"location":"tutorials/chains/arbitrum/#use-the-public-testnet","title":"Use the public testnet","text":"Now that you have everything set up, you can follow one of the other FireFly guides such as Using Tokens or Custom Smart Contracts. For detailed instructions on deploying a custom smart contract to Arbitrum, please see the Arbitrum docs for instructions using various tools.
"},{"location":"tutorials/chains/avalanche/","title":"Avalanche","text":"Starting with FireFly v1.1, it's easy to connect to public Ethereum chains. This guide will walk you through the steps to create a local FireFly development environment and connect it to the Avalanche C-Chain Fuji testnet.
"},{"location":"tutorials/chains/avalanche/#previous-steps-install-the-firefly-cli","title":"Previous steps: Install the FireFly CLI","text":"If you haven't set up the FireFly CLI already, please go back to the Getting Started guide and read the section on how to Install the FireFly CLI.
\u2190 \u2460 Install the FireFly CLI
"},{"location":"tutorials/chains/avalanche/#create-an-evmconnectyml-config-file","title":"Create anevmconnect.yml config file","text":"In order to connect to the Avalanche testnet, you will need to set a few configuration options for the evmconnect blockchain connector. Create a text file called evmconnect.yml with the following contents:
confirmations:\n required: 4 # choose the number of confirmations you require\npolicyengine.simple:\n fixedGasPrice: null\n gasOracle:\n mode: connector\n For more info about confirmations, see Public vs. Permissioned
For this tutorial, we will assume this file is saved at ~/Desktop/evmconnect.yml. If your path is different, you will need to adjust the path in the next command below.
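Before initializing the stack, it can be worth sanity-checking that your chosen RPC endpoint really serves the Fuji testnet. The eth_chainId JSON-RPC method returns the chain ID as a hex string (0xa869 for Fuji), which you can convert to decimal and compare against the --chain-id flag. A quick sketch (the commented curl call is illustrative; substitute your own RPC endpoint):

```shell
# Query the chain ID from your RPC endpoint (substitute the real URL):
#   curl -s -X POST -H 'Content-Type: application/json' \
#     -d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' <selected RPC endpoint>
# The result is hex, e.g. "0xa869"; convert it to decimal to compare
# against the value passed to --chain-id (43113 for Avalanche Fuji):
printf '%d\n' 0xa869
```

If the printed number doesn't match the chain ID you plan to pass to ff init, you are pointed at the wrong network.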
To create a local FireFly development stack and connect it to the Avalanche Fuji testnet, we will use command line flags to customize the following settings:
- Create a new stack named avalanche with 1 member
- Disable multiparty mode. We are going to be using this FireFly node as a Web3 gateway, and we don't need to communicate with a consortium here
- Set the chain ID to 43113 (the correct ID for the Avalanche Fuji testnet)
- Merge in the custom evmconnect config file

To do this, run the following command:
ff init ethereum avalanche 1 \\\n --multiparty=false \\\n -n remote-rpc \\\n --remote-node-url <selected RPC endpoint> \\\n --chain-id 43113 \\\n --connector-config ~/Desktop/evmconnect.yml\n"},{"location":"tutorials/chains/avalanche/#start-the-stack","title":"Start the stack","text":"Now you should be able to start your stack by running:
ff start avalanche\n After some time it should print out the following:
Web UI for member '0': http://127.0.0.1:5000/ui\nSandbox UI for member '0': http://127.0.0.1:5109\n\n\nTo see logs for your stack run:\n\nff logs avalanche\n"},{"location":"tutorials/chains/avalanche/#get-some-avax","title":"Get some AVAX","text":"At this point you should have a working FireFly stack, talking to a public chain. However, you won't be able to run any transactions just yet, because you don't have any way to pay for gas. A testnet faucet can give us some AVAX, the native token for Avalanche.
First, you will need to know what signing address your FireFly node is using. To check that, you can run:
ff accounts list avalanche\n[\n {\n \"address\": \"0x6688e14f719766cc2a5856ccef63b069703d86f7\",\n \"privateKey\": \"...\"\n }\n]\n Copy the address listed in the output from this command. Go to https://faucet.avax.network/ and paste the address in the form. Make sure that the network you select is Fuji (C-Chain). Click the Request 2 AVAX button.
"},{"location":"tutorials/chains/avalanche/#confirm-the-transaction-on-snowtrace","title":"Confirm the transaction on Snowtrace","text":"You should be able to go lookup your account on Snowtrace for the Fuji testnet and see that you now have a balance of 2 AVAX. Simply paste in your account address or transaction ID to search for it.
"},{"location":"tutorials/chains/avalanche/#use-the-public-testnet","title":"Use the public testnet","text":"Now that you have everything set up, you can follow one of the other FireFly guides such as Using Tokens or Custom Smart Contracts. For detailed instructions on deploying a custom smart contract to Avalanche, please see the Avalanche docs for instructions using various tools.
"},{"location":"tutorials/chains/binance_smart_chain/","title":"Binance Smart Chain","text":"Starting with FireFly v1.1, it's easy to connect to public Ethereum chains. This guide will walk you through the steps to create a local FireFly development environment and connect it to the public Binance Smart Chain testnet.
"},{"location":"tutorials/chains/binance_smart_chain/#previous-steps-install-the-firefly-cli","title":"Previous steps: Install the FireFly CLI","text":"If you haven't set up the FireFly CLI already, please go back to the Getting Started guide and read the section on how to Install the FireFly CLI.
\u2190 \u2460 Install the FireFly CLI
"},{"location":"tutorials/chains/binance_smart_chain/#create-an-evmconnectyml-config-file","title":"Create anevmconnect.yml config file","text":"In order to connect to the Binance Smart Chain testnet, you will need to set a few configuration options for the evmconnect blockchain connector. Create a text file called evmconnect.yml with the following contents:
confirmations:\n required: 4 # choose the number of confirmations you require\npolicyengine.simple:\n fixedGasPrice: null\n gasOracle:\n mode: connector\n For more info about confirmations, see Public vs. Permissioned
For this tutorial, we will assume this file is saved at ~/Desktop/evmconnect.yml. If your path is different, you will need to adjust the path in the next command below.
To create a local FireFly development stack and connect it to the Binance Smart Chain testnet, we will use command line flags to customize the following settings:
- Create a new stack named bsc with 1 member
- Disable multiparty mode. We are going to be using this FireFly node as a Web3 gateway, and we don't need to communicate with a consortium here
- Set the chain ID to 97 (the correct ID for the Binance Smart Chain testnet)
- Merge in the custom evmconnect config file

To do this, run the following command:
ff init ethereum bsc 1 \\\n --multiparty=false \\\n -n remote-rpc \\\n --remote-node-url <selected RPC endpoint> \\\n --chain-id 97 \\\n --connector-config ~/Desktop/evmconnect.yml\n"},{"location":"tutorials/chains/binance_smart_chain/#start-the-stack","title":"Start the stack","text":"Now you should be able to start your stack by running:
ff start bsc\n After some time it should print out the following:
Web UI for member '0': http://127.0.0.1:5000/ui\nSandbox UI for member '0': http://127.0.0.1:5109\n\n\nTo see logs for your stack run:\n\nff logs bsc\n"},{"location":"tutorials/chains/binance_smart_chain/#get-some-bnb","title":"Get some BNB","text":"At this point you should have a working FireFly stack, talking to a public chain. However, you won't be able to run any transactions just yet, because you don't have any way to pay for gas. A testnet faucet can give us some BNB, the native token for Binance Smart Chain.
First, you will need to know what signing address your FireFly node is using. To check that, you can run:
ff accounts list bsc\n[\n {\n \"address\": \"0x235461d246ab95d367925b4e91bd2755a921fdd8\",\n \"privateKey\": \"...\"\n }\n]\n Copy the address listed in the output from this command. Go to https://testnet.binance.org/faucet-smart and paste the address in the form. Complete the CAPTCHA and click the Give me BNB button.
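If you'd rather verify the faucet transfer over JSON-RPC than in a block explorer, note that eth_getBalance returns the balance as a hex string denominated in wei. A conversion sketch (the hex value below is an assumed 0.5 BNB balance, i.e. 5 × 10^17 wei, not a real query result):

```shell
# eth_getBalance returns hex wei, e.g. "0x6f05b59d3b20000" for 0.5 BNB.
# Convert hex wei to a decimal token amount (1 BNB = 10^18 wei):
wei=$(printf '%d' 0x6f05b59d3b20000)
awk -v w="$wei" 'BEGIN { printf "%.2f BNB\n", w / 1e18 }'
```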
"},{"location":"tutorials/chains/binance_smart_chain/#confirm-the-transaction-on-bscscan","title":"Confirm the transaction on Bscscan","text":"You should be able to go lookup your account on Bscscan for the testnet https://testnet.bscscan.com/ and see that you now have a balance of 0.5 BNB. Simply paste in your account address to search for it.
"},{"location":"tutorials/chains/binance_smart_chain/#use-the-public-testnet","title":"Use the public testnet","text":"Now that you have everything set up, you can follow one of the other FireFly guides such as Using Tokens or Custom Smart Contracts. For detailed instructions on deploying a custom smart contract to Binance Smart Chain, please see the Binance docs for instructions using various tools.
"},{"location":"tutorials/chains/fabric_test_network/","title":"Work with Fabric-Samples Test Network","text":"This guide will walk you through the steps to create a local FireFly development environment and connect it to the Fabric Test Network from the Fabric Samples repo
"},{"location":"tutorials/chains/fabric_test_network/#previous-steps-install-the-firefly-cli","title":"Previous steps: Install the FireFly CLI","text":"If you haven't set up the FireFly CLI already, please go back to the Getting Started guide and read the section on how to Install the FireFly CLI.
\u2190 \u2460 Install the FireFly CLI
"},{"location":"tutorials/chains/fabric_test_network/#start-fabric-test-network-with-fabric-ca","title":"Start Fabric Test Network with Fabric CA","text":"For details about the Fabric Test Network and how to set it up, please see the Fabric Samples repo. The one important detail is that you need to start up the Test Network with a Fabric CA. This is because Fabconnect will use the Fabric CA to create an identity for its FireFly node to use. To start up the network with the CA, and create a new channel called mychannel run:
./network.sh up createChannel -ca\n NOTE: If you already have the Test Network running, you will need to bring it down first, by running: ./network.sh down
Next we will need to package and deploy the FireFly chaincode to mychannel in our new network. For more details on packaging and deploying chaincode, please see the Fabric chaincode lifecycle documentation. If you already have the FireFly repo cloned in the same directory as your fabric-samples repo, you can run the following script from your test-network directory:
NOTE: This script is provided as a convenience only, and you are not required to use it. You are welcome to package and deploy the chaincode to your test-network any way you would like.
#!/bin/bash\n\n# This file should be run from the test-network directory in the fabric-samples repo\n# It also assumes that you have the firefly repo checked out at the same level as the fabric-samples directory\n# It also assumes that the test-network is up and running and a channel named 'mychannel' has already been created\n\ncd ../../firefly/smart_contracts/fabric/firefly-go\nGO111MODULE=on go mod vendor\ncd ../../../../fabric-samples/test-network\n\nexport PATH=${PWD}/../bin:$PATH\nexport FABRIC_CFG_PATH=$PWD/../config/\n\npeer lifecycle chaincode package firefly.tar.gz --path ../../firefly/smart_contracts/fabric/firefly-go --lang golang --label firefly_1.0\n\nexport CORE_PEER_TLS_ENABLED=true\nexport CORE_PEER_LOCALMSPID=\"Org1MSP\"\nexport CORE_PEER_TLS_ROOTCERT_FILE=${PWD}/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt\nexport CORE_PEER_MSPCONFIGPATH=${PWD}/organizations/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp\nexport CORE_PEER_ADDRESS=localhost:7051\n\npeer lifecycle chaincode install firefly.tar.gz\n\nexport CORE_PEER_LOCALMSPID=\"Org2MSP\"\nexport CORE_PEER_TLS_ROOTCERT_FILE=${PWD}/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt\nexport CORE_PEER_MSPCONFIGPATH=${PWD}/organizations/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp\nexport CORE_PEER_ADDRESS=localhost:9051\n\npeer lifecycle chaincode install firefly.tar.gz\n\nexport CC_PACKAGE_ID=$(peer lifecycle chaincode queryinstalled --output json | jq --raw-output \".installed_chaincodes[0].package_id\")\n\npeer lifecycle chaincode approveformyorg -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --channelID mychannel --name firefly --version 1.0 --package-id $CC_PACKAGE_ID --sequence 1 --tls --cafile \"${PWD}/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem\"\n\nexport 
CORE_PEER_LOCALMSPID=\"Org1MSP\"\nexport CORE_PEER_MSPCONFIGPATH=${PWD}/organizations/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp\nexport CORE_PEER_TLS_ROOTCERT_FILE=${PWD}/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt\nexport CORE_PEER_ADDRESS=localhost:7051\n\npeer lifecycle chaincode approveformyorg -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --channelID mychannel --name firefly --version 1.0 --package-id $CC_PACKAGE_ID --sequence 1 --tls --cafile \"${PWD}/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem\"\n\npeer lifecycle chaincode commit -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --channelID mychannel --name firefly --version 1.0 --sequence 1 --tls --cafile \"${PWD}/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem\" --peerAddresses localhost:7051 --tlsRootCertFiles \"${PWD}/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt\" --peerAddresses localhost:9051 --tlsRootCertFiles \"${PWD}/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt\"\n"},{"location":"tutorials/chains/fabric_test_network/#create-ccpyml-documents","title":"Create ccp.yml documents","text":"Each FireFly Supernode (specifically the Fabconnect instance in each) will need to know how to connect to the Fabric network. Fabconnect will use a Fabric Connection Profile which describes the network and tells it where the certs and keys are that it needs. Below is a ccp.yml for each organization. You will need to fill in one line by replacing the string FILL_IN_KEY_NAME_HERE, because the file name of the private key for each user is randomly generated.
Create a new file at ~/org1_ccp.yml with the contents below. Replace the string FILL_IN_KEY_NAME_HERE with the filename in your fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/keystore directory.
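Because the key filename is randomly generated, it can be convenient to script the substitution rather than editing by hand. The sketch below is a hypothetical helper, not part of the guide: it demonstrates the sed substitution against a temporary stand-in for the keystore directory, since the real path depends on your fabric-samples checkout:

```shell
# Demo of filling in FILL_IN_KEY_NAME_HERE automatically.
# Uses a temp dir as a stand-in for:
#   fabric-samples/test-network/organizations/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/keystore
tmp=$(mktemp -d)
mkdir -p "$tmp/keystore"
touch "$tmp/keystore/abc123_sk"   # stand-in for the randomly named private key
printf 'key:\n  path: /etc/firefly/keystore/FILL_IN_KEY_NAME_HERE\n' > "$tmp/org1_ccp.yml"

KEY=$(ls "$tmp/keystore")         # the single *_sk file in the keystore
sed "s/FILL_IN_KEY_NAME_HERE/$KEY/" "$tmp/org1_ccp.yml" > "$tmp/org1_ccp.filled.yml"
cat "$tmp/org1_ccp.filled.yml"
```

Against a real checkout, point KEY at the keystore directory quoted above and run sed over your actual ~/org1_ccp.yml.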
certificateAuthorities:\n org1.example.com:\n tlsCACerts:\n path: /etc/firefly/organizations/peerOrganizations/org1.example.com/msp/tlscacerts/ca.crt\n url: https://ca_org1:7054\n grpcOptions:\n ssl-target-name-override: org1.example.com\n registrar:\n enrollId: admin\n enrollSecret: adminpw\nchannels:\n mychannel:\n orderers:\n - fabric_orderer\n peers:\n fabric_peer:\n chaincodeQuery: true\n endorsingPeer: true\n eventSource: true\n ledgerQuery: true\nclient:\n BCCSP:\n security:\n default:\n provider: SW\n enabled: true\n hashAlgorithm: SHA2\n level: 256\n softVerify: true\n credentialStore:\n cryptoStore:\n path: /etc/firefly/organizations/peerOrganizations/org1.example.com/msp\n path: /etc/firefly/organizations/peerOrganizations/org1.example.com/msp\n cryptoconfig:\n path: /etc/firefly/organizations/peerOrganizations/org1.example.com/msp\n logging:\n level: info\n organization: org1.example.com\n tlsCerts:\n client:\n cert:\n path: /etc/firefly/organizations/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/signcerts/cert.pem\n key:\n path: /etc/firefly/organizations/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/keystore/FILL_IN_KEY_NAME_HERE\norderers:\n fabric_orderer:\n tlsCACerts:\n path: /etc/firefly/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/tlscacerts/tls-localhost-9054-ca-orderer.pem\n url: grpcs://orderer.example.com:7050\norganizations:\n org1.example.com:\n certificateAuthorities:\n - org1.example.com\n cryptoPath: /tmp/msp\n mspid: Org1MSP\n peers:\n - fabric_peer\npeers:\n fabric_peer:\n tlsCACerts:\n path: /etc/firefly/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/tlscacerts/tls-localhost-7054-ca-org1.pem\n url: grpcs://peer0.org1.example.com:7051\nversion: 1.1.0%\n"},{"location":"tutorials/chains/fabric_test_network/#organization-2-connection-profile","title":"Organization 2 connection profile","text":"Create a new file at 
~/org2_ccp.yml with the contents below. Replace the string FILL_IN_KEY_NAME_HERE with the filename in your fabric-samples/test-network/organizations/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp/keystore directory.
certificateAuthorities:\n org2.example.com:\n tlsCACerts:\n path: /etc/firefly/organizations/peerOrganizations/org2.example.com/msp/tlscacerts/ca.crt\n url: https://ca_org2:8054\n grpcOptions:\n ssl-target-name-override: org2.example.com\n registrar:\n enrollId: admin\n enrollSecret: adminpw\nchannels:\n mychannel:\n orderers:\n - fabric_orderer\n peers:\n fabric_peer:\n chaincodeQuery: true\n endorsingPeer: true\n eventSource: true\n ledgerQuery: true\nclient:\n BCCSP:\n security:\n default:\n provider: SW\n enabled: true\n hashAlgorithm: SHA2\n level: 256\n softVerify: true\n credentialStore:\n cryptoStore:\n path: /etc/firefly/organizations/peerOrganizations/org2.example.com/msp\n path: /etc/firefly/organizations/peerOrganizations/org2.example.com/msp\n cryptoconfig:\n path: /etc/firefly/organizations/peerOrganizations/org2.example.com/msp\n logging:\n level: info\n organization: org2.example.com\n tlsCerts:\n client:\n cert:\n path: /etc/firefly/organizations/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp/signcerts/cert.pem\n key:\n path: /etc/firefly/organizations/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp/keystore/FILL_IN_KEY_NAME_HERE\norderers:\n fabric_orderer:\n tlsCACerts:\n path: /etc/firefly/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/tls/tlscacerts/tls-localhost-9054-ca-orderer.pem\n url: grpcs://orderer.example.com:7050\norganizations:\n org2.example.com:\n certificateAuthorities:\n - org2.example.com\n cryptoPath: /tmp/msp\n mspid: Org2MSP\n peers:\n - fabric_peer\npeers:\n fabric_peer:\n tlsCACerts:\n path: /etc/firefly/organizations/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/tlscacerts/tls-localhost-8054-ca-org2.pem\n url: grpcs://peer0.org2.example.com:9051\nversion: 1.1.0%\n"},{"location":"tutorials/chains/fabric_test_network/#create-the-firefly-stack","title":"Create the FireFly stack","text":"Now we can create a FireFly stack and pass in 
these files as command line flags.
NOTE: The following command should be run in the test-network directory as it includes a relative path to the organizations directory containing each org's MSP.
ff init fabric dev \\\n --ccp \"${HOME}/org1_ccp.yml\" \\\n --msp \"organizations\" \\\n --ccp \"${HOME}/org2_ccp.yml\" \\\n --msp \"organizations\" \\\n --channel mychannel \\\n --chaincode firefly\n"},{"location":"tutorials/chains/fabric_test_network/#edit-docker-composeoverrideyml","title":"Edit docker-compose.override.yml","text":"The last step before starting up FireFly is to make sure that our FireFly containers have networking access to the Fabric containers. Because these are in two different Docker Compose networks by default, normally the containers would not be able to connect directly. We can fix this by instructing Docker to also attach our FireFly containers to the Fabric test network Docker Compose network. The easiest way to do that is to edit ~/.firefly/stacks/dev/docker-compose.override.yml and set its contents to the following:
# Add custom config overrides here\n# See https://docs.docker.com/compose/extends\nversion: \"2.1\"\nnetworks:\n default:\n name: fabric_test\n external: true\n"},{"location":"tutorials/chains/fabric_test_network/#start-firefly-stack","title":"Start FireFly stack","text":"Now we can start up FireFly!
ff start dev\n After everything starts up, you should have two FireFly nodes that are each mapped to an Organization in your Fabric network. You can see that they each use separate signing keys for their Org on messages that each FireFly node sends.
"},{"location":"tutorials/chains/fabric_test_network/#connecting-to-a-remote-fabric-network","title":"Connecting to a remote Fabric Network","text":"This same guide can be adapted to connect to a remote Fabric network running somewhere else. They key takeaways are:
- Create a connection profile (ccp.yml) for each organization, and pass each one to ff init
- Make each organization's MSP directory available locally, and pass its path to ff init

There are quite a few moving parts in this guide, and if steps are missed or done out of order it can cause problems. Below are some of the common situations that you might run into while following this guide, and solutions for each.
You may see a message something along the lines of:
ERROR: for firefly_core_0 Container \"bc04521372aa\" is unhealthy.\nEncountered errors while bringing up the project.\n In this case, we need to look at the container logs to get more detail about what happened. To do this, we can run ff start and tell it not to clean up the stack after the failure, to let you inspect what went wrong. To do that, you can run:
ff start dev --verbose --no-rollback\n Then run docker logs <container_name> to see the logs for that container.
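Once you have captured a container's output (for example with docker logs firefly_core_0 > core.log), grepping for error-level lines is usually the fastest way to find the root cause. A small self-contained sketch (the sample log lines are fabricated for illustration):

```shell
# Write a fabricated sample log, then filter it for error lines,
# as you would with output captured from `docker logs <container_name>`.
cat > core.log <<'EOF'
INFO  starting firefly core
INFO  connecting to fabconnect
ERROR enroll failed: POST https://ca_org1:7054/enroll: no such host
EOF
grep -n 'ERROR' core.log
```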
Error: http://127.0.0.1:5102/identities [500] {\"error\":\"enroll failed: enroll failed: POST failure of request: POST https://ca_org1:7054/enroll\\n{\\\"hosts\\\":null,\\\"certificate_request\\\":\\\"-----BEGIN CERTIFICATE REQUEST-----\\\\nMIH0MIGcAgEAMBAxDjAMBgNVBAMTBWFkbWluMFkwEwYHKoZIzj0CAQYIKoZIzj0D\\\\nAQcDQgAE7qJZ5nGt/kxU9IvrEb7EmgNIgn9xXoQUJLl1+U9nXdWB9cnxcmoitnvy\\\\nYN63kbBuUh0z21vOmO8GLD3QxaRaD6AqMCgGCSqGSIb3DQEJDjEbMBkwFwYDVR0R\\\\nBBAwDoIMMGQ4NGJhZWIwZGY0MAoGCCqGSM49BAMCA0cAMEQCIBcWb127dVxm/80K\\\\nB2LtenAY/Jtb2FbZczolrXNCKq+LAiAcGEJ6Mx8LVaPzuSP4uGpEoty6+bEErc5r\\\\nHVER+0aXiQ==\\\\n-----END CERTIFICATE REQUEST-----\\\\n\\\",\\\"profile\\\":\\\"\\\",\\\"crl_override\\\":\\\"\\\",\\\"label\\\":\\\"\\\",\\\"NotBefore\\\":\\\"0001-01-01T00:00:00Z\\\",\\\"NotAfter\\\":\\\"0001-01-01T00:00:00Z\\\",\\\"ReturnPrecert\\\":false,\\\"CAName\\\":\\\"\\\"}: Post \\\"https://ca_org1:7054/enroll\\\": dial tcp: lookup ca_org1 on 127.0.0.11:53: no such host\"}\n If you see something in your logs that looks like the above, there could be a couple issues:
- A hostname may be incorrect in your ccp.yml. Check the ccp.yml for that member and make sure the hostnames are correct.
- Check your docker-compose.override.yml file to make sure you added the fabric_test network as instructed above.

User credentials store creation failed. Failed to load identity configurations: failed to create identity config from backends: failed to load client TLSConfig : failed to load client key: failed to load pem bytes from path /etc/firefly/organizations/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/keystore/cfc50311e2204f232cfdfaf4eba7731279f2366ec291ca1c1781e2bf7bc75529_sk: open /etc/firefly/organizations/peerOrganizations/org1.example.com/users/User1@org1.example.com/msp/keystore/cfc50311e2204f232cfdfaf4eba7731279f2366ec291ca1c1781e2bf7bc75529_sk: no such file or directory\n If you see something in your logs that looks like the above, it's likely that your private key file name is not correct in your ccp.yml file for that particular member. Check your ccp.yml and make sure all the files listed there exist in your organizations directory.
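Since most of the errors above come down to a path: entry in a ccp.yml pointing at a file that doesn't exist, a small check can save time. This is a hypothetical helper, sketched against a throwaway file rather than your real ccp.yml:

```shell
# List every 'path:' entry in a ccp file and flag the ones that don't exist.
# Demo uses a throwaway fragment; point it at your real ccp.yml instead.
cat > demo_ccp.yml <<'EOF'
tlsCACerts:
  path: /etc/hosts
key:
  path: /definitely/missing/key_sk
EOF
grep -E '^[[:space:]]*path:' demo_ccp.yml | awk '{print $2}' | while read -r p; do
  if [ -e "$p" ]; then echo "ok:      $p"; else echo "missing: $p"; fi
done
```

Any line reported as missing is a candidate for the "no such file or directory" failure above.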
Starting with FireFly v1.1, it's easy to connect to public Ethereum chains. This guide will walk you through the steps to create a local FireFly development environment and connect it to the public Moonbeam Alpha testnet.
"},{"location":"tutorials/chains/moonbeam/#previous-steps-install-the-firefly-cli","title":"Previous steps: Install the FireFly CLI","text":"If you haven't set up the FireFly CLI already, please go back to the Getting Started guide and read the section on how to Install the FireFly CLI.
\u2190 \u2460 Install the FireFly CLI
"},{"location":"tutorials/chains/moonbeam/#create-an-evmconnectyml-config-file","title":"Create anevmconnect.yml config file","text":"In order to connect to the Moonbeam testnet, you will need to set a few configuration options for the evmconnect blockchain connector. Create a text file called evmconnect.yml with the following contents:
confirmations:\n required: 4 # choose the number of confirmations you require\npolicyengine.simple:\n fixedGasPrice: null\n gasOracle:\n mode: connector\n For more info about confirmations, see Public vs. Permissioned
For this tutorial, we will assume this file is saved at ~/Desktop/evmconnect.yml. If your path is different, you will need to adjust the path in the next command below.
To create a local FireFly development stack and connect it to the Moonbeam Alpha testnet, we will use command line flags to customize the following settings:
- Create a new stack named moonbeam with 1 member
- Disable multiparty mode. We are going to be using this FireFly node as a Web3 gateway, and we don't need to communicate with a consortium here
- Set the chain ID to 1287 (the correct ID for the Moonbeam Alpha testnet)
- Merge in the custom evmconnect config file

To do this, run the following command:
ff init ethereum moonbeam 1 \\\n --multiparty=false \\\n -n remote-rpc \\\n --remote-node-url <selected RPC endpoint> \\\n --chain-id 1287 \\\n --connector-config ~/Desktop/evmconnect.yml\n"},{"location":"tutorials/chains/moonbeam/#start-the-stack","title":"Start the stack","text":"Now you should be able to start your stack by running:
ff start moonbeam\n After some time it should print out the following:
Web UI for member '0': http://127.0.0.1:5000/ui\nSandbox UI for member '0': http://127.0.0.1:5109\n\n\nTo see logs for your stack run:\n\nff logs moonbeam\n"},{"location":"tutorials/chains/moonbeam/#get-some-dev","title":"Get some DEV","text":"At this point you should have a working FireFly stack, talking to a public chain. However, you won't be able to run any transactions just yet, because you don't have any way to pay for gas. A testnet faucet can give us some DEV, the native token for Moonbeam.
First, you will need to know what signing address your FireFly node is using. To check that, you can run:
ff accounts list moonbeam\n[\n {\n \"address\": \"0x02d42c32a97c894486afbc7b717edff50c70b292\",\n \"privateKey\": \"...\"\n }\n]\n Copy the address listed in the output from this command. Go to https://apps.moonbeam.network/moonbase-alpha/faucet/ and paste the address in the form. Click the Submit button.
"},{"location":"tutorials/chains/moonbeam/#confirm-the-transaction-on-moonscan","title":"Confirm the transaction on Moonscan","text":"You should be able to go lookup your account on Moonscan for the Moonbase Alpha testnet and see that you now have a sufficient balance of DEV. Simply paste in your account address to search for it.
"},{"location":"tutorials/chains/moonbeam/#use-the-public-testnet","title":"Use the public testnet","text":"Now that you have everything set up, you can follow one of the other FireFly guides such as Using Tokens or Custom Smart Contracts. For detailed instructions on interacting with the Moonbeam Alpha testnet, please see the Moonbeam docs.
"},{"location":"tutorials/chains/optimism/","title":"Optimism","text":"Starting with FireFly v1.1, it's easy to connect to public Ethereum chains. This guide will walk you through the steps to create a local FireFly development environment and connect it to the Optimism Goerli testnet.
"},{"location":"tutorials/chains/optimism/#previous-steps-install-the-firefly-cli","title":"Previous steps: Install the FireFly CLI","text":"If you haven't set up the FireFly CLI already, please go back to the Getting Started guide and read the section on how to Install the FireFly CLI.
\u2190 \u2460 Install the FireFly CLI
"},{"location":"tutorials/chains/optimism/#create-an-evmconnectyml-config-file","title":"Create anevmconnect.yml config file","text":"In order to connect to the Optimism testnet, you will need to set a few configuration options for the evmconnect blockchain connector. Create a text file called evmconnect.yml with the following contents:
confirmations:\n required: 4 # choose the number of confirmations you require\npolicyengine.simple:\n fixedGasPrice: null\n gasOracle:\n mode: connector\n For this tutorial, we will assume this file is saved at ~/Desktop/evmconnect.yml. If your path is different, you will need to adjust the path in the next command below.
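As a rough mental model for the required: 4 setting above: the extra latency before FireFly treats a transaction as confirmed is approximately the number of confirmations times the chain's block time. The block time below is an illustrative assumption, not a measured figure:

```shell
# Rough extra confirmation latency = confirmations x block time.
# block_time_seconds is an assumed illustrative value, not measured.
confirmations=4
block_time_seconds=2
echo "~$((confirmations * block_time_seconds)) seconds of extra latency"
```

Raising required trades confirmation speed for stronger protection against short chain reorganizations.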
To create a local FireFly development stack and connect it to the Optimism testnet, we will use command line flags to customize the following settings:
- Create a new stack named optimism with 1 member
- Disable multiparty mode. We are going to be using this FireFly node as a Web3 gateway, and we don't need to communicate with a consortium here
- Set the chain ID to 420 (the correct ID for the Optimism testnet)
- Merge in the custom evmconnect config file

To do this, run the following command:
ff init ethereum optimism 1 \\\n --multiparty=false \\\n -n remote-rpc \\\n --remote-node-url <selected RPC endpoint> \\\n --chain-id 420 \\\n --connector-config ~/Desktop/evmconnect.yml\n"},{"location":"tutorials/chains/optimism/#start-the-stack","title":"Start the stack","text":"Now you should be able to start your stack by running:
ff start optimism\n After some time it should print out the following:
Web UI for member '0': http://127.0.0.1:5000/ui\nSandbox UI for member '0': http://127.0.0.1:5109\n\n\nTo see logs for your stack run:\n\nff logs optimism\n"},{"location":"tutorials/chains/optimism/#get-some-optimism","title":"Get some Optimism","text":"At this point you should have a working FireFly stack, talking to a public chain. However, you won't be able to run any transactions just yet, because you don't have any way to pay for gas. A testnet faucet can give us some OP, the native token for Optimism.
First, you will need to know what signing address your FireFly node is using. To check that, you can run:
ff accounts list optimism\n[\n {\n \"address\": \"0x235461d246ab95d367925b4e91bd2755a921fdd8\",\n \"privateKey\": \"...\"\n }\n]\n Copy the address listed in the output from this command. Go to https://optimismfaucet.xyz/. You will need to login to your Github account and paste the address in the form.
"},{"location":"tutorials/chains/optimism/#confirm-the-transaction-on-blockcscout","title":"Confirm the transaction on Blockcscout","text":"You should be able to go lookup your account on Blockscout for Optimism testnet https://blockscout.com/optimism/goerli and see that you now have a balance of 100 OP. Simply paste in your account address to search for it.
"},{"location":"tutorials/chains/optimism/#use-the-public-testnet","title":"Use the public testnet","text":"Now that you have everything set up, you can follow one of the other FireFly guides such as Using Tokens or Custom Smart Contracts. For detailed instructions on deploying a custom smart contract to Optimism, please see the Optimism docs for instructions using various tools.
"},{"location":"tutorials/chains/polygon_testnet/","title":"Polygon Testnet","text":"Starting with FireFly v1.1, it's easy to connect to public Ethereum chains. This guide will walk you through the steps to create a local FireFly development environment and connect it to the public Polygon Mumbai testnet.
"},{"location":"tutorials/chains/polygon_testnet/#previous-steps-install-the-firefly-cli","title":"Previous steps: Install the FireFly CLI","text":"If you haven't set up the FireFly CLI already, please go back to the Getting Started guide and read the section on how to Install the FireFly CLI.
\u2190 \u2460 Install the FireFly CLI
"},{"location":"tutorials/chains/polygon_testnet/#create-an-evmconnectyml-config-file","title":"Create anevmconnect.yml config file","text":"In order to connect to the Polygon testnet, you will need to set a few configuration options for the evmconnect blockchain connector. Create a text file called evmconnect.yml with the following contents:
confirmations:\n required: 4 # choose the number of confirmations you require\npolicyengine.simple:\n fixedGasPrice: null\n gasOracle:\n mode: connector\n For more info about confirmations, see Public vs. Permissioned
For this tutorial, we will assume this file is saved at ~/Desktop/evmconnect.yml. If your path is different, you will need to adjust the path in the next command below.
To create a local FireFly development stack and connect it to the Polygon Mumbai testnet, we will use command line flags to customize the following settings:
- Create a new stack named polygon with 1 member
- Disable multiparty mode. We are going to be using this FireFly node as a Web3 gateway, and we don't need to communicate with a consortium here
- Set the chain ID to 80001 (the correct ID for the Polygon Mumbai testnet)
- Merge in the custom evmconnect config file

To do this, run the following command:
ff init ethereum polygon 1 \\\n --multiparty=false \\\n -n remote-rpc \\\n --remote-node-url <selected RPC endpoint> \\\n --chain-id 80001 \\\n --connector-config ~/Desktop/evmconnect.yml\n"},{"location":"tutorials/chains/polygon_testnet/#start-the-stack","title":"Start the stack","text":"Now you should be able to start your stack by running:
ff start polygon\n After some time it should print out the following:
Web UI for member '0': http://127.0.0.1:5000/ui\nSandbox UI for member '0': http://127.0.0.1:5109\n\n\nTo see logs for your stack run:\n\nff logs polygon\n"},{"location":"tutorials/chains/polygon_testnet/#get-some-matic","title":"Get some MATIC","text":"At this point you should have a working FireFly stack, talking to a public chain. However, you won't be able to run any transactions just yet, because you don't have any way to pay for gas. A testnet faucet can give us some MATIC, the native token for Polygon.
First, you will need to know what signing address your FireFly node is using. To check that, you can run:
ff accounts list polygon\n[\n {\n \"address\": \"0x02d42c32a97c894486afbc7b717edff50c70b292\",\n \"privateKey\": \"...\"\n }\n]\n Copy the address listed in the output from this command. Go to https://faucet.polygon.technology/ and paste the address in the form. Click the Submit button, and then Confirm.
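Before pasting the address into the faucet form, a quick format check can catch truncated copies. A minimal sketch in Python; the helper name and regex are illustrative, not part of FireFly:

```python
import re

def looks_like_eth_address(addr: str) -> bool:
    """Return True if addr has the shape of a 20-byte hex Ethereum address."""
    return re.fullmatch(r"0x[0-9a-fA-F]{40}", addr) is not None

# The address from the ff accounts list output above passes the check
print(looks_like_eth_address("0x02d42c32a97c894486afbc7b717edff50c70b292"))
```

Note this only validates the shape, not the checksum casing or whether the key is actually held by your node.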
"},{"location":"tutorials/chains/polygon_testnet/#confirm-the-transaction-on-polygonscan","title":"Confirm the transaction on Polygonscan","text":"You should be able to go lookup your account on Polygonscan for the Mumbai testnet and see that you now have a balance of 0.2 MATIC. Simply paste in your account address to search for it.
You can also click on the Internal Txns tab from your account page to see the actual transfer of the MATIC from the faucet.
"},{"location":"tutorials/chains/polygon_testnet/#use-the-public-testnet","title":"Use the public testnet","text":"Now that you have everything set up, you can follow one of the other FireFly guides such as Using Tokens or Custom Smart Contracts. For detailed instructions on deploying a custom smart contract to Polygon, please see the Polygon docs for instructions using various tools.
"},{"location":"tutorials/chains/tezos_testnet/","title":"Tezos Testnet","text":"This guide will walk you through the steps to create a local FireFly development environment and connect it to the public Tezos Ghostnet testnet.
"},{"location":"tutorials/chains/tezos_testnet/#previous-steps-install-the-firefly-cli","title":"Previous steps: Install the FireFly CLI","text":"If you haven't set up the FireFly CLI already, please go back to the Getting Started guide and read the section on how to Install the FireFly CLI.
"},{"location":"tutorials/chains/tezos_testnet/#set-up-the-transaction-signing-service","title":"Set up the transaction signing service","text":"Signatory service allows to work with many different key-management systems.\\ By default, FF uses local signing option.\\ However, it is also possible to configure the transaction signing service using key management systems such as: AWS/Google/Azure KMS, HCP Vault, etc.
NOTE: The default option is not secure and is mainly intended for development and demo purposes. Therefore, for production, use your selected KMS. The full list can be found here.
"},{"location":"tutorials/chains/tezos_testnet/#creating-a-new-stack","title":"Creating a new stack","text":"To create a local FireFly development stack and connect it to the Tezos Ghostnet testnet, we will use command line flags to customize the following settings:
A new stack named tezos with 1 member. Disable multiparty mode, since we are going to be using this FireFly node as a Web3 gateway and we don't need to communicate with a consortium here. To do this, run the following command:
ff init tezos dev 1 \\\n --multiparty=false \\\n --remote-node-url <selected RPC endpoint>\n NOTE: Public RPC nodes may have limitations or may not support all of the RPC endpoints FireFly requires. Therefore it is not recommended to use them for production; you may need to run your own node or use a third-party vendor.
"},{"location":"tutorials/chains/tezos_testnet/#start-the-stack","title":"Start the stack","text":"Now you should be able to start your stack by running:
ff start dev\n After some time it should print out the following:
Web UI for member '0': http://127.0.0.1:5000/ui\nSandbox UI for member '0': http://127.0.0.1:5109\n\n\nTo see logs for your stack run:\n\nff logs dev\n"},{"location":"tutorials/chains/tezos_testnet/#get-some-xtz","title":"Get some XTZ","text":"At this point you should have a working FireFly stack, talking to a public chain. However, you won't be able to run any transactions just yet, because you don't have any way to pay transaction fees. A testnet faucet can give us some XTZ, the native token for Tezos.
First, you need to get the account address that was created during the signer setup step. To check that, you can run:
ff accounts list dev\n[\n {\n \"address\": \"tz1cuFw1E2Mn2bVS8q8d7QoCb6FXC18JivSp\",\n \"privateKey\": \"...\"\n }\n]\n After that, go to the Tezos Ghostnet Faucet, paste the address in the form, and click the Request button.
"},{"location":"tutorials/chains/tezos_testnet/#confirm-the-transaction-on-tzstats","title":"Confirm the transaction on TzStats","text":"You should be able to go lookup your account on TzStats for the Ghostnet testnet and see that you now have a balance of 100 XTZ (or 2001 XTZ accordingly). Simply paste in your account address to search for it.
On the Transfers tab of your account page you will see the actual transfer of the XTZ from the faucet.
"},{"location":"tutorials/chains/tezos_testnet/#use-the-public-testnet","title":"Use the public testnet","text":"Now that you have everything set up, you can follow one of the other FireFly guides such as Custom Smart Contracts. For detailed instructions on deploying a custom smart contract to Tezos, please see the Tezos docs for instructions using various tools.
"},{"location":"tutorials/chains/zksync_testnet/","title":"zkSync Testnet","text":"Starting with FireFly v1.1, it's easy to connect to public Ethereum chains. This guide will walk you through the steps to create a local FireFly development environment and connect it to the zkSync testnet.
"},{"location":"tutorials/chains/zksync_testnet/#previous-steps-install-the-firefly-cli","title":"Previous steps: Install the FireFly CLI","text":"If you haven't set up the FireFly CLI already, please go back to the Getting Started guide and read the section on how to Install the FireFly CLI.
"},{"location":"tutorials/chains/zksync_testnet/#create-an-evmconnectyml-config-file","title":"Create anevmconnect.yml config file","text":"In order to connect to the zkSync testnet, you will need to set a few configuration options for the evmconnect blockchain connector. Create a text file called evmconnect.yml with the following contents:
confirmations:\n required: 4 # choose the number of confirmations you require\npolicyengine.simple:\n fixedGasPrice: null\n gasOracle:\n mode: connector\n For this tutorial, we will assume this file is saved at ~/Desktop/evmconnect.yml. If your path is different, you will need to adjust the path in the next command below.
To create a local FireFly development stack and connect it to the zkSync testnet, we will use command line flags to customize the following settings:
A new stack named zksync with 1 member. Disable multiparty mode, since we are going to be using this FireFly node as a Web3 gateway and we don't need to communicate with a consortium here. Use the evmconnect blockchain connector, connected to the remote RPC node at https://zksync2-testnet.zksync.dev. Set the chain ID to 280 (the correct ID for the zkSync testnet). Use the evmconnect config file created above. To do this, run the following command:
ff init zksync 1\\\n --multiparty=false \\\n -b ethereum \\\n -c evmconnect \\\n -n remote-rpc \\\n --remote-node-url https://zksync2-testnet.zksync.dev\\\n --chain-id 280 \\\n --connector-config ~/Desktop/evmconnect.yml\n"},{"location":"tutorials/chains/zksync_testnet/#start-the-stack","title":"Start the stack","text":"Now you should be able to start your stack by running:
ff start zksync\n After some time it should print out the following:
Web UI for member '0': http://127.0.0.1:5000/ui\nSandbox UI for member '0': http://127.0.0.1:5109\n\n\nTo see logs for your stack run:\n\nff logs zksync\n"},{"location":"tutorials/chains/zksync_testnet/#get-some-eth","title":"Get some ETH","text":"At this point you should have a working FireFly stack, talking to a public chain. However, you won't be able to run any transactions just yet, because you don't have any way to pay for gas. zkSync does not currently have its own native token and instead uses ETH for transactions. A testnet faucet can give us some ETH.
First, you will need to know what signing address your FireFly node is using. To check that, you can run:
ff accounts list zksync\n[\n {\n \"address\": \"0x8cf4fd38b2d56a905113d23b5a7131f0269d8611\",\n \"privateKey\": \"...\"\n }\n]\n Copy your zkSync address and go to the Goerli Ethereum faucet and paste the address in the form. Click the Request Tokens button. Note that any Goerli Ethereum faucet will work.
"},{"location":"tutorials/chains/zksync_testnet/#confirm-the-transaction-on-the-etherscan-explorer","title":"Confirm the transaction on the Etherscan Explorer","text":"You should be able to go lookup your account at https://etherscan.io/ and see that you now have a balance of 0.025 ETH. Simply paste in your account address to search for it.
"},{"location":"tutorials/chains/zksync_testnet/#use-the-public-testnet","title":"Use the public testnet","text":"Now that you have everything set up, you can follow one of the other FireFly guides such as Using Tokens or Custom Smart Contracts. For detailed instructions on deploying a custom smart contract to zkSync, please see the zkSync docs for instructions using various tools.
"},{"location":"tutorials/custom_contracts/","title":"Work with custom smart contracts","text":""},{"location":"tutorials/custom_contracts/#quick-reference","title":"Quick reference","text":"Almost all blockchain platforms offer the ability to execute smart contracts on-chain in order to manage states on the shared ledger. FireFly provides support to use RESTful APIs to interact with the smart contracts deployed in the target blockchains, and listening to events via websocket.
FireFly's unified API creates a consistent application experience regardless of the specific underlying blockchain implementation. It also provides developer-friendly features like automatic OpenAPI Specification generation for smart contracts, plus a built-in Swagger UI.
"},{"location":"tutorials/custom_contracts/#key-concepts","title":"Key concepts","text":"FireFly defines the following constructs to support custom smart contracts:
"},{"location":"tutorials/custom_contracts/#contract-interface","title":"Contract Interface","text":"FireFly defines a common, blockchain agnostic way to describe smart contracts. This is referred to as a Contract Interface. A contract interface is written in the FireFly Interface (FFI) format. It is a simple JSON document that has a name, a namespace, a version, a list of methods, and a list of events.
For more details, you can also have a look at the Reference page for the FireFly Interface Format.
For blockchains that offer a DSL describing the smart contract interface, such as Ethereum's ABI (Application Binary Interface), FireFly offers an API to convert the DSL into the FFI format.
NOTE: Contract interfaces are scoped to a namespace. Within a namespace each contract interface must have a unique name and version combination. The same name and version combination can exist in different namespaces simultaneously.
"},{"location":"tutorials/custom_contracts/#http-api","title":"HTTP API","text":"Based on a Contract Interface, FireFly further defines an HTTP API for the smart contract, which is complete with an OpenAPI Specification and the Swagger UI. An HTTP API defines an /invoke root path to submit transactions, and a /query root path to send query requests to read the state back out.
How the invoke vs. query requests get interpreted into the native blockchain requests is specific to the blockchain's connector. For instance, the Ethereum connector translates /invoke calls to eth_sendTransaction JSON-RPC requests, while /query calls are translated into eth_call JSON-RPC requests. On the other hand, the Fabric connector translates /invoke calls to the multiple requests required to submit a transaction to a Fabric channel (which first collects endorsements from peer nodes, and then sends the assembled transaction payload to an orderer; for details, please refer to the Fabric documentation).
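The Ethereum-side mapping described here can be pictured as a small dispatch table. This is a hypothetical sketch of the idea, not the actual firefly-evmconnect code:

```python
# Hypothetical sketch of how an Ethereum-style connector might map
# FireFly's generic /invoke and /query paths onto JSON-RPC methods.
RPC_METHODS = {
    "invoke": "eth_sendTransaction",  # state-changing, submitted as a transaction
    "query": "eth_call",              # read-only, executed locally on the node
}

def to_jsonrpc(kind: str, params: dict) -> dict:
    """Build a JSON-RPC 2.0 request for an invoke or query call."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": RPC_METHODS[kind],
        "params": [params],
    }
```

The Fabric equivalent would not be a single method lookup like this; as noted above, an invoke there expands into several endorsement and ordering requests.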
Regardless of a blockchain's specific design, transaction processing is always asynchronous. This means a transaction is submitted to the network, at which point the submitting client gets an acknowledgement that it has been accepted for further processing. The client then listens for notifications from the blockchain when the transaction gets committed to the blockchain's ledger.
FireFly defines event listeners to allow the client application to specify the relevant blockchain events to keep track of. A client application can then receive the notifications from FireFly via an event subscription.
"},{"location":"tutorials/custom_contracts/#event-subscription","title":"Event Subscription","text":"An event listener in FireFly tracks specific blockchain events, while an event subscription directs FireFly to send those events to the client application. Each subscription creates a stream of events that can be delivered to the client with various delivery options, ensuring an at-least-once delivery guarantee.
This is exactly the same as listening for any other events from FireFly. For more details on how Subscriptions work in FireFly you can read the Getting Started guide to Listen for events.
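Because delivery is at-least-once, a client may occasionally see the same event twice and should apply events idempotently while still acknowledging every delivery. A minimal sketch of that handler logic; the event shape ({"id": ...}) and helper names are illustrative, not FireFly's API:

```python
def handle_events(events, seen_ids, apply_event):
    """At-least-once delivery means duplicates are possible: de-duplicate
    by event ID before applying, then return the IDs to acknowledge."""
    acks = []
    for ev in events:
        if ev["id"] not in seen_ids:
            seen_ids.add(ev["id"])
            apply_event(ev)
        # ack duplicates too, so the server stops redelivering them
        acks.append(ev["id"])
    return acks
```

In a real application, seen_ids would need to be bounded or persisted alongside the application state so a restart does not replay already-applied events.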
"},{"location":"tutorials/custom_contracts/#custom-onchain-logic-async-programming-in-firefly","title":"Custom onchain logic async programming in FireFly","text":"Like the rest of FireFly, custom onchain logic support are implemented with an asynchronous programming model. The key concepts here are:
FireFly emits an event of type blockchain_event_received when this happens. This guide describes the steps to deploy a smart contract to an Ethereum blockchain and use FireFly to interact with it in order to submit transactions, query for state, and listen for events.
NOTE: This guide assumes that you are running a local FireFly stack with at least 2 members and an Ethereum blockchain created by the FireFly CLI. If you need help getting that set up, please see the Getting Started guide to Start your environment.
"},{"location":"tutorials/custom_contracts/ethereum/#example-smart-contract","title":"Example smart contract","text":"For this tutorial, we will be using a well known, but slightly modified smart contract called SimpleStorage, and will be using this contract on an Ethereum blockchain. As the name implies, it's a very simple contract which stores an unsigned 256 bit integer, emits and event when the value is updated, and allows you to retrieve the current value.
Here is the source for this contract:
// SPDX-License-Identifier: Apache-2.0\npragma solidity ^0.8.10;\n\n// Declares a new contract\ncontract SimpleStorage {\n // Storage. Persists in between transactions\n uint256 x;\n\n // Allows the unsigned integer stored to be changed\n function set(uint256 newValue) public {\n x = newValue;\n emit Changed(msg.sender, newValue);\n }\n\n // Returns the currently stored unsigned integer\n function get() public view returns (uint256) {\n return x;\n }\n\n event Changed(address indexed from, uint256 value);\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#contract-deployment","title":"Contract deployment","text":"If you need to deploy an Ethereum smart contract with a signing key that FireFly will use for submitting future transactions it is recommended to use FireFly's built in contract deployment API. This is useful in many cases. For example, you may want to deploy a token contract and have FireFly mint some tokens. Many token contracts only allow the contract deployer to mint, so the contract would need to be deployed with a FireFly signing key.
You will need to compile the contract yourself using solc or some other tool. After you have compiled the contract, look in the JSON output file for the fields to build the request below.
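For example, with the combined JSON output of solc (solc --combined-json abi,bin), the deploy request body can be assembled as below. The key names ("contracts", "abi", "bin") match common solc output but should be verified against your compiler version; the helper itself is illustrative, not part of FireFly:

```python
import json

def build_deploy_request(solc_json: str, contract_key: str) -> dict:
    """Assemble the body for POST /contracts/deploy from solc combined JSON."""
    compiled = json.loads(solc_json)
    c = compiled["contracts"][contract_key]
    abi = c["abi"]
    if isinstance(abi, str):  # some solc versions emit the ABI as a JSON string
        abi = json.loads(abi)
    return {
        "contract": c["bin"],  # hex-encoded deployment bytecode
        "definition": abi,     # the full ABI array
        "input": [],           # constructor args (SimpleStorage takes none)
    }
```

The resulting dict can then be sent as the JSON body of the deploy request shown below.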
"},{"location":"tutorials/custom_contracts/ethereum/#request","title":"Request","text":"Field Descriptionkey The signing key to use to dpeloy the contract. If omitted, the namespaces's default signing key will be used. contract The compiled bytecode for your smart contract. It should be either a hex encded string or Base64. definition The full ABI JSON array from your compiled JSON file. Copy the entire value of the abi field from the [ to the ]. input An ordered list of constructor arguments. Some contracts may not require any (such as this example). POST http://localhost:5000/api/v1/namespaces/default/contracts/deploy
{\n \"contract\": \"608060405234801561001057600080fd5b5061019e806100206000396000f3fe608060405234801561001057600080fd5b50600436106100365760003560e01c806360fe47b11461003b5780636d4ce63c14610057575b600080fd5b61005560048036038101906100509190610111565b610075565b005b61005f6100cd565b60405161006c919061014d565b60405180910390f35b806000819055503373ffffffffffffffffffffffffffffffffffffffff167fb52dda022b6c1a1f40905a85f257f689aa5d69d850e49cf939d688fbe5af5946826040516100c2919061014d565b60405180910390a250565b60008054905090565b600080fd5b6000819050919050565b6100ee816100db565b81146100f957600080fd5b50565b60008135905061010b816100e5565b92915050565b600060208284031215610127576101266100d6565b5b6000610135848285016100fc565b91505092915050565b610147816100db565b82525050565b6000602082019050610162600083018461013e565b9291505056fea2646970667358221220e6cbd7725b98b234d07bc1823b60ac065b567c6645d15c8f8f6986e5fa5317c664736f6c634300080b0033\",\n \"definition\": [\n {\n \"anonymous\": false,\n \"inputs\": [\n {\n \"indexed\": true,\n \"internalType\": \"address\",\n \"name\": \"from\",\n \"type\": \"address\"\n },\n {\n \"indexed\": false,\n \"internalType\": \"uint256\",\n \"name\": \"value\",\n \"type\": \"uint256\"\n }\n ],\n \"name\": \"Changed\",\n \"type\": \"event\"\n },\n {\n \"inputs\": [],\n \"name\": \"get\",\n \"outputs\": [\n {\n \"internalType\": \"uint256\",\n \"name\": \"\",\n \"type\": \"uint256\"\n }\n ],\n \"stateMutability\": \"view\",\n \"type\": \"function\"\n },\n {\n \"inputs\": [\n {\n \"internalType\": \"uint256\",\n \"name\": \"newValue\",\n \"type\": \"uint256\"\n }\n ],\n \"name\": \"set\",\n \"outputs\": [],\n \"stateMutability\": \"nonpayable\",\n \"type\": \"function\"\n }\n ],\n \"input\": []\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#response","title":"Response","text":"{\n \"id\": \"aa155a3c-2591-410e-bc9d-68ae7de34689\",\n \"namespace\": \"default\",\n \"tx\": \"4712ffb3-cc1a-4a91-aef2-206ac068ba6f\",\n \"type\": \"blockchain_deploy\",\n \"status\": 
\"Succeeded\",\n \"plugin\": \"ethereum\",\n \"input\": {\n \"contract\": \"608060405234801561001057600080fd5b5061019e806100206000396000f3fe608060405234801561001057600080fd5b50600436106100365760003560e01c806360fe47b11461003b5780636d4ce63c14610057575b600080fd5b61005560048036038101906100509190610111565b610075565b005b61005f6100cd565b60405161006c919061014d565b60405180910390f35b806000819055503373ffffffffffffffffffffffffffffffffffffffff167fb52dda022b6c1a1f40905a85f257f689aa5d69d850e49cf939d688fbe5af5946826040516100c2919061014d565b60405180910390a250565b60008054905090565b600080fd5b6000819050919050565b6100ee816100db565b81146100f957600080fd5b50565b60008135905061010b816100e5565b92915050565b600060208284031215610127576101266100d6565b5b6000610135848285016100fc565b91505092915050565b610147816100db565b82525050565b6000602082019050610162600083018461013e565b9291505056fea2646970667358221220e6cbd7725b98b234d07bc1823b60ac065b567c6645d15c8f8f6986e5fa5317c664736f6c634300080b0033\",\n \"definition\": [\n {\n \"anonymous\": false,\n \"inputs\": [\n {\n \"indexed\": true,\n \"internalType\": \"address\",\n \"name\": \"from\",\n \"type\": \"address\"\n },\n {\n \"indexed\": false,\n \"internalType\": \"uint256\",\n \"name\": \"value\",\n \"type\": \"uint256\"\n }\n ],\n \"name\": \"Changed\",\n \"type\": \"event\"\n },\n {\n \"inputs\": [],\n \"name\": \"get\",\n \"outputs\": [\n {\n \"internalType\": \"uint256\",\n \"name\": \"\",\n \"type\": \"uint256\"\n }\n ],\n \"stateMutability\": \"view\",\n \"type\": \"function\"\n },\n {\n \"inputs\": [\n {\n \"internalType\": \"uint256\",\n \"name\": \"newValue\",\n \"type\": \"uint256\"\n }\n ],\n \"name\": \"set\",\n \"outputs\": [],\n \"stateMutability\": \"nonpayable\",\n \"type\": \"function\"\n }\n ],\n \"input\": [],\n \"key\": \"0xddd93a452bfc8d3e62bbc60c243046e4d0cb971b\",\n \"options\": null\n },\n \"output\": {\n \"headers\": {\n \"requestId\": \"default:aa155a3c-2591-410e-bc9d-68ae7de34689\",\n \"type\": \"TransactionSuccess\"\n },\n 
\"contractLocation\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n },\n \"protocolId\": \"000000000024/000000\",\n \"transactionHash\": \"0x32d1144091877266d7f0426e48db157e7d1a857c62e6f488319bb09243f0f851\"\n },\n \"created\": \"2023-02-03T15:42:52.750277Z\",\n \"updated\": \"2023-02-03T15:42:52.750277Z\"\n}\n Here we can see in the response above under the output section that our new contract address is 0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1. This is the address that we will reference in the rest of this guide.
If you have an Ethereum ABI for an existing smart contract, there is an HTTP endpoint on the FireFly API that will take the ABI as input and automatically generate the FireFly Interface for you. Rather than handcrafting our FFI, we'll let FireFly generate it for us using that endpoint now.
"},{"location":"tutorials/custom_contracts/ethereum/#request_1","title":"Request","text":"Here we will take the JSON ABI generated by truffle or solc and POST that to FireFly to have it automatically generate the FireFly Interface for us. Copy the abi from the compiled JSON file, and put that inside an input object like the example below:
POST http://localhost:5000/api/v1/namespaces/default/contracts/interfaces/generate
{\n \"input\": {\n \"abi\": [\n {\n \"anonymous\": false,\n \"inputs\": [\n {\n \"indexed\": true,\n \"internalType\": \"address\",\n \"name\": \"from\",\n \"type\": \"address\"\n },\n {\n \"indexed\": false,\n \"internalType\": \"uint256\",\n \"name\": \"value\",\n \"type\": \"uint256\"\n }\n ],\n \"name\": \"Changed\",\n \"type\": \"event\"\n },\n {\n \"inputs\": [],\n \"name\": \"get\",\n \"outputs\": [\n {\n \"internalType\": \"uint256\",\n \"name\": \"\",\n \"type\": \"uint256\"\n }\n ],\n \"stateMutability\": \"view\",\n \"type\": \"function\"\n },\n {\n \"inputs\": [\n {\n \"internalType\": \"uint256\",\n \"name\": \"newValue\",\n \"type\": \"uint256\"\n }\n ],\n \"name\": \"set\",\n \"outputs\": [],\n \"stateMutability\": \"nonpayable\",\n \"type\": \"function\"\n }\n ]\n }\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#response_1","title":"Response","text":"FireFly generates and returns the the full FireFly Interface for the SimpleStorage contract in the response body:
{\n \"namespace\": \"default\",\n \"name\": \"\",\n \"description\": \"\",\n \"version\": \"\",\n \"methods\": [\n {\n \"name\": \"get\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [],\n \"returns\": [\n {\n \"name\": \"\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n },\n {\n \"name\": \"set\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"newValue\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ],\n \"returns\": []\n }\n ],\n \"events\": [\n {\n \"name\": \"Changed\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"from\",\n \"schema\": {\n \"type\": \"string\",\n \"details\": {\n \"type\": \"address\",\n \"internalType\": \"address\",\n \"indexed\": true\n }\n }\n },\n {\n \"name\": \"value\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n }\n ]\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#broadcast-the-contract-interface","title":"Broadcast the contract interface","text":"Now that we have a FireFly Interface representation of our smart contract, we want to broadcast that to the entire network. This broadcast will be pinned to the blockchain, so we can always refer to this specific name and version, and everyone in the network will know exactly which contract interface we are talking about.
We will take the output from the previous HTTP response above, fill in the name and version and then POST that to the /contracts/interfaces API endpoint.
POST http://localhost:5000/api/v1/namespaces/default/contracts/interfaces?publish=true
NOTE: Without passing the query parameter publish=true when the interface is created, it will initially be unpublished and not broadcasted to other members of the network (if configured in multi-party). To publish the interface, a subsequent API call would need to be made to /contracts/interfaces/{name}/{version}/publish
{\n \"namespace\": \"default\",\n \"name\": \"SimpleStorage\",\n \"version\": \"v1.0.0\",\n \"description\": \"\",\n \"methods\": [\n {\n \"name\": \"get\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [],\n \"returns\": [\n {\n \"name\": \"\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n },\n {\n \"name\": \"set\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"newValue\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ],\n \"returns\": []\n }\n ],\n \"events\": [\n {\n \"name\": \"Changed\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"from\",\n \"schema\": {\n \"type\": \"string\",\n \"details\": {\n \"type\": \"address\",\n \"internalType\": \"address\",\n \"indexed\": true\n }\n }\n },\n {\n \"name\": \"value\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n }\n ]\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#response_2","title":"Response","text":"{\n \"id\": \"8bdd27a5-67c1-4960-8d1e-7aa31b9084d3\",\n \"message\": \"3cd0dde2-1e39-4c9e-a4a1-569e87cca93a\",\n \"namespace\": \"default\",\n \"name\": \"SimpleStorage\",\n \"description\": \"\",\n \"version\": \"v1.0.0\",\n \"methods\": [\n {\n \"id\": \"56467890-5713-4463-84b8-4537fcb63d8b\",\n \"contract\": \"8bdd27a5-67c1-4960-8d1e-7aa31b9084d3\",\n \"name\": \"get\",\n \"namespace\": \"default\",\n \"pathname\": \"get\",\n \"description\": \"\",\n \"params\": [],\n \"returns\": [\n {\n \"name\": \"\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n },\n {\n \"id\": \"6b254d1d-5f5f-491e-bbd2-201e96892e1a\",\n \"contract\": \"8bdd27a5-67c1-4960-8d1e-7aa31b9084d3\",\n \"name\": \"set\",\n \"namespace\": 
\"default\",\n \"pathname\": \"set\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"newValue\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ],\n \"returns\": []\n }\n ],\n \"events\": [\n {\n \"id\": \"aa1fe67b-b2ac-41af-a7e7-7ad54a30a78d\",\n \"contract\": \"8bdd27a5-67c1-4960-8d1e-7aa31b9084d3\",\n \"namespace\": \"default\",\n \"pathname\": \"Changed\",\n \"name\": \"Changed\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"from\",\n \"schema\": {\n \"type\": \"string\",\n \"details\": {\n \"type\": \"address\",\n \"internalType\": \"address\",\n \"indexed\": true\n }\n }\n },\n {\n \"name\": \"value\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n }\n ],\n \"published\": true\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#create-an-http-api-for-the-contract","title":"Create an HTTP API for the contract","text":"Now comes the fun part where we see some of the powerful, developer-friendly features of FireFly. The next thing we're going to do is tell FireFly to build an HTTP API for this smart contract, complete with an OpenAPI Specification and Swagger UI. As part of this, we'll also tell FireFly where the contract is on the blockchain. Like the interface broadcast above, this will also generate a broadcast which will be pinned to the blockchain so all the members of the network will be aware of and able to interact with this API.
We need to copy the id field we got in the response from the previous step to the interface.id field in the request body below. We will also pick a name that will be part of the URL for our HTTP API, so be sure to pick a name that is URL friendly. In this case we'll call it simple-storage. Lastly, in the location.address field, we're telling FireFly where an instance of the contract is deployed on-chain.
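Since the name becomes part of the URL, a quick check that it only contains URL-safe characters can save a failed request. The rule below (lowercase letters, digits, and hyphens) is a conservative assumption for illustration, not FireFly's exact validation:

```python
import re

def is_url_friendly(name: str) -> bool:
    """Conservative check: lowercase letters, digits, and hyphens only,
    with no leading, trailing, or doubled hyphens."""
    return re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", name) is not None
```

Under this rule, "simple-storage" passes, while something like "Simple Storage" would not.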
NOTE: The location field is optional here, but if it is omitted, it will be required in every request to invoke or query the contract. This can be useful if you have multiple instances of the same contract deployed to different addresses.
POST http://localhost:5000/api/v1/namespaces/default/apis?publish=true
NOTE: Without passing the query parameter publish=true when the API is created, it will initially be unpublished and not broadcasted to other members of the network (if configured in multi-party). To publish the API, a subsequent API call would need to be made to /apis/{apiName}/publish
{\n \"name\": \"simple-storage\",\n \"interface\": {\n \"id\": \"8bdd27a5-67c1-4960-8d1e-7aa31b9084d3\"\n },\n \"location\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n }\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#response_3","title":"Response","text":"{\n \"id\": \"9a681ec6-1dee-42a0-b91b-61d23a814b0f\",\n \"namespace\": \"default\",\n \"interface\": {\n \"id\": \"8bdd27a5-67c1-4960-8d1e-7aa31b9084d3\"\n },\n \"location\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n },\n \"name\": \"simple-storage\",\n \"message\": \"d90d0386-8874-43fb-b7d3-485c22f35f47\",\n \"urls\": {\n \"openapi\": \"http://127.0.0.1:5000/api/v1/namespaces/default/apis/simple-storage/api/swagger.json\",\n \"ui\": \"http://127.0.0.1:5000/api/v1/namespaces/default/apis/simple-storage/api\"\n },\n \"published\": true\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#view-openapi-spec-for-the-contract","title":"View OpenAPI spec for the contract","text":"You'll notice in the response body that there are a couple of URLs near the bottom. If you navigate to the one labeled ui in your browser, you should see the Swagger UI for your smart contract.
Now that we've got everything set up, it's time to use our smart contract! We're going to make a POST request to the invoke/set endpoint to set the integer value on-chain. Let's set it to the value of 3 right now.
POST http://localhost:5000/api/v1/namespaces/default/apis/simple-storage/invoke/set
{\n \"input\": {\n \"newValue\": 3\n }\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#response_4","title":"Response","text":"{\n \"id\": \"41c67c63-52cf-47ce-8a59-895fe2ffdc86\"\n}\n You'll notice that we just get an ID back here, and that's expected due to the asynchronous programming model of working with smart contracts in FireFly. To see what the value is now, we can query the smart contract. In a little bit, we'll also subscribe to the events emitted by this contract so we can know when the value is updated in realtime.
"},{"location":"tutorials/custom_contracts/ethereum/#query-the-current-value","title":"Query the current value","text":"To make a read-only request to the blockchain to check the current value of the stored integer, we can make a POST to the query/get endpoint.
POST http://localhost:5000/api/v1/namespaces/default/apis/simple-storage/query/get
{}\n"},{"location":"tutorials/custom_contracts/ethereum/#response_5","title":"Response","text":"{\n \"output\": \"3\"\n}\n NOTE: Some contracts may have queries that require input parameters. That's why the query endpoint is a POST, rather than a GET so that parameters can be passed as JSON in the request body. This particular function does not have any parameters, so we just pass an empty JSON object.
Some smart contract functions may accept or require additional options to be passed with the request. For example, a Solidity function might be payable, meaning that a value field must be specified, indicating an amount of ETH to be transferred with the request. Each of your smart contract API's /invoke or /query endpoints support an options object in addition to the input arguments for the function itself.
Here is an example of sending 100 wei with a transaction:
"},{"location":"tutorials/custom_contracts/ethereum/#request_6","title":"Request","text":"POST http://localhost:5000/api/v1/namespaces/default/apis/simple-storage/invoke/set
{\n \"input\": {\n \"newValue\": 3\n },\n \"options\": {\n \"value\": 100\n }\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#response_6","title":"Response","text":"{\n \"id\": \"41c67c63-52cf-47ce-8a59-895fe2ffdc86\"\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#create-a-blockchain-event-listener","title":"Create a blockchain event listener","text":"Now that we've seen how to submit transactions and perform read-only queries to the blockchain, let's look at how to receive blockchain events so we know when things are happening in realtime.
If you look at the source code for the smart contract we're working with above, you'll notice that it emits an event when the stored value of the integer is set. In order to receive these events, we first need to instruct FireFly to listen for this specific type of blockchain event. To do this, we create an Event Listener. The /contracts/listeners endpoint is RESTful so there are POST, GET, and DELETE methods available on it. To create a new listener, we will make a POST request. We are going to tell FireFly to listen to events with name \"Changed\" from the FireFly Interface we defined earlier, referenced by its ID. We will also tell FireFly which contract address we expect to emit these events, and the topic to assign these events to. You can specify multiple filters for a listener; in this case we only specify one for our event. Topics are a way for applications to subscribe to events they are interested in.
POST http://localhost:5000/api/v1/namespaces/default/contracts/listeners
{\n \"filters\": [\n {\n \"interface\": {\n \"id\": \"8bdd27a5-67c1-4960-8d1e-7aa31b9084d3\"\n },\n \"location\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n },\n \"eventPath\": \"Changed\"\n }\n ],\n \"options\": {\n \"firstEvent\": \"newest\"\n },\n \"topic\": \"simple-storage\"\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#response_7","title":"Response","text":"{\n \"id\": \"e7c8457f-4ffd-42eb-ac11-4ad8aed30de1\",\n \"interface\": {\n \"id\": \"55fdb62a-fefc-4313-99e4-e3f95fcca5f0\"\n },\n \"namespace\": \"default\",\n \"name\": \"019104d7-bb0a-c008-76a9-8cb923d91b37\",\n \"backendId\": \"019104d7-bb0a-c008-76a9-8cb923d91b37\",\n \"location\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n },\n \"created\": \"2024-07-30T18:12:12.704964Z\",\n \"event\": {\n \"name\": \"Changed\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"from\",\n \"schema\": {\n \"type\": \"string\",\n \"details\": {\n \"type\": \"address\",\n \"internalType\": \"address\",\n \"indexed\": true\n }\n }\n },\n {\n \"name\": \"value\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n },\n \"signature\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1:Changed(address,uint256) [i=0]\",\n \"topic\": \"simple-storage\",\n \"options\": {\n \"firstEvent\": \"newest\"\n },\n \"filters\": [\n {\n \"event\": {\n \"name\": \"Changed\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"from\",\n \"schema\": {\n \"type\": \"string\",\n \"details\": {\n \"type\": \"address\",\n \"internalType\": \"address\",\n \"indexed\": true\n }\n }\n },\n {\n \"name\": \"value\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n },\n \"location\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n },\n \"interface\": {\n \"id\": \"55fdb62a-fefc-4313-99e4-e3f95fcca5f0\"\n 
},\n \"signature\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1:Changed(address,uint256) [i=0]\"\n }\n ]\n}\n We can see in the response, that FireFly pulls all the schema information from the FireFly Interface that we broadcasted earlier and creates the listener with that schema. This is useful so that we don't have to enter all of that data again.
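If you are scripting listener creation, the request body shown above can be assembled with a small helper. This is a sketch only — the field names mirror the request in this guide, and a listener may carry more than one filter by appending to the filters array.

```python
def build_listener_body(interface_id, address, event_path, topic,
                        first_event="newest"):
    # Mirrors the POST /contracts/listeners body shown above,
    # with a single filter; append more dicts to "filters" as needed.
    return {
        "filters": [
            {
                "interface": {"id": interface_id},
                "location": {"address": address},
                "eventPath": event_path,
            }
        ],
        "options": {"firstEvent": first_event},
        "topic": topic,
    }

body = build_listener_body(
    "8bdd27a5-67c1-4960-8d1e-7aa31b9084d3",
    "0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1",
    "Changed",
    "simple-storage",
)
```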
"},{"location":"tutorials/custom_contracts/ethereum/#querying-listener-status","title":"Querying listener status","text":"If you are interested in learning about the current state of a listener you have created, you can query with the fetchstatus parameter. For FireFly stacks with an EVM compatible blockchain connector, the response will include checkpoint information and whether the listener is currently in catchup mode.
GET http://localhost:5000/api/v1/namespaces/default/contracts/listeners/1bfa3b0f-3d90-403e-94a4-af978d8c5b14?fetchstatus
{\n \"id\": \"1bfa3b0f-3d90-403e-94a4-af978d8c5b14\",\n \"interface\": {\n \"id\": \"8bdd27a5-67c1-4960-8d1e-7aa31b9084d3\"\n },\n \"namespace\": \"default\",\n \"name\": \"sb-66209ffc-d355-4ac0-7151-bc82490ca9df\",\n \"protocolId\": \"sb-66209ffc-d355-4ac0-7151-bc82490ca9df\",\n \"location\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n },\n \"created\": \"2022-02-17T22:02:36.34549538Z\",\n \"event\": {\n \"name\": \"Changed\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"from\",\n \"schema\": {\n \"type\": \"string\",\n \"details\": {\n \"type\": \"address\",\n \"internalType\": \"address\",\n \"indexed\": true\n }\n }\n },\n {\n \"name\": \"value\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\",\n \"internalType\": \"uint256\"\n }\n }\n }\n ]\n },\n \"status\": {\n \"checkpoint\": {\n \"block\": 0,\n \"transactionIndex\": -1,\n \"logIndex\": -1\n },\n \"catchup\": true\n },\n \"options\": {\n \"firstEvent\": \"oldest\"\n }\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#subscribe-to-events-from-our-contract","title":"Subscribe to events from our contract","text":"Now that we've told FireFly that it should listen for specific events on the blockchain, we can set up a Subscription for FireFly to send events to our app. To set up our subscription, we will make a POST to the /subscriptions endpoint.
We will set a friendly name simple-storage to identify the Subscription when we are connecting to it in the next step.
We're also going to set up a filter to only send blockchain events from the listener that we created in the previous step. To do that, we'll copy the listener ID from the step above (1bfa3b0f-3d90-403e-94a4-af978d8c5b14) and set that as the value of the listener field in the example below:
POST http://localhost:5000/api/v1/namespaces/default/subscriptions
{\n \"namespace\": \"default\",\n \"name\": \"simple-storage\",\n \"transport\": \"websockets\",\n \"filter\": {\n \"events\": \"blockchain_event_received\",\n \"blockchainevent\": {\n \"listener\": \"1bfa3b0f-3d90-403e-94a4-af978d8c5b14\"\n }\n },\n \"options\": {\n \"firstEvent\": \"oldest\"\n }\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#response_8","title":"Response","text":"{\n \"id\": \"f826269c-65ed-4634-b24c-4f399ec53a32\",\n \"namespace\": \"default\",\n \"name\": \"simple-storage\",\n \"transport\": \"websockets\",\n \"filter\": {\n \"events\": \"blockchain_event_received\",\n \"message\": {},\n \"transaction\": {},\n \"blockchainevent\": {\n \"listener\": \"1bfa3b0f-3d90-403e-94a4-af978d8c5b14\"\n }\n },\n \"options\": {\n \"firstEvent\": \"-1\",\n \"withData\": false\n },\n \"created\": \"2022-03-15T17:35:30.131698921Z\",\n \"updated\": null\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#receive-custom-smart-contract-events","title":"Receive custom smart contract events","text":"The last step is to connect a WebSocket client to FireFly to receive the event. You can use any WebSocket client you like, such as Postman or a command line app like websocat.
Connect your WebSocket client to ws://localhost:5000/ws.
After connecting the WebSocket client, send a message to tell FireFly to:
Start receiving events on the subscription named simple-storage in the default namespace, automatically acknowledging each event: {\n \"type\": \"start\",\n \"name\": \"simple-storage\",\n \"namespace\": \"default\",\n \"autoack\": true\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#websocket-event","title":"WebSocket event","text":"After creating the subscription, you should see an event arrive on the connected WebSocket client that looks something like this:
{\n \"id\": \"0f4a31d6-9743-4537-82df-5a9c76ccbd1e\",\n \"sequence\": 24,\n \"type\": \"blockchain_event_received\",\n \"namespace\": \"default\",\n \"reference\": \"dd3e1554-c832-47a8-898e-f1ee406bea41\",\n \"created\": \"2022-03-15T17:32:27.824417878Z\",\n \"blockchainevent\": {\n \"id\": \"dd3e1554-c832-47a8-898e-f1ee406bea41\",\n \"sequence\": 7,\n \"source\": \"ethereum\",\n \"namespace\": \"default\",\n \"name\": \"Changed\",\n \"listener\": \"1bfa3b0f-3d90-403e-94a4-af978d8c5b14\",\n \"protocolId\": \"000000000010/000000/000000\",\n \"output\": {\n \"from\": \"0xb7e6a5eb07a75a2c81801a157192a82bcbce0f21\",\n \"value\": \"3\"\n },\n \"info\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\",\n \"blockNumber\": \"10\",\n \"logIndex\": \"0\",\n \"signature\": \"Changed(address,uint256)\",\n \"subId\": \"sb-724b8416-786d-4e67-4cd3-5bae4a26eb0e\",\n \"timestamp\": \"1647365460\",\n \"transactionHash\": \"0xd5b5c716554097b2868d8705241bb2189bb76d16300f702ad05b0b02fccc4afb\",\n \"transactionIndex\": \"0x0\"\n },\n \"timestamp\": \"2022-03-15T17:31:00Z\",\n \"tx\": {\n \"type\": \"\"\n }\n },\n \"subscription\": {\n \"id\": \"f826269c-65ed-4634-b24c-4f399ec53a32\",\n \"namespace\": \"default\",\n \"name\": \"simple-storage\"\n }\n}\n You can see in the event received over the WebSocket connection, the blockchain event that was emitted from our first transaction, which happened in the past. We received this event, because when we set up both the Listener, and the Subscription, we specified the \"firstEvent\" as \"oldest\". This tells FireFly to look for this event from the beginning of the blockchain, and that your app is interested in FireFly events since the beginning of FireFly's event history.
In the event, we can also see the blockchainevent itself, which has an output object. These are the params in our FireFly Interface, and the actual output of the event. Here we can see the value is 3 which is what we set the integer to in our original transaction.
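The start message shown earlier can be assembled client-side like this — a sketch assuming the subscription name and namespace from this guide; any WebSocket library can then send the resulting string after connecting to ws://localhost:5000/ws.

```python
import json

def build_start_message(name, namespace="default", autoack=True):
    # Payload that tells FireFly to begin delivering events for the
    # named subscription; with autoack, FireFly marks each event as
    # delivered without an explicit ack message from the client.
    return json.dumps({
        "type": "start",
        "name": name,
        "namespace": namespace,
        "autoack": autoack,
    })

msg = build_start_message("simple-storage")
```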
If you query by the ID of your subscription with the fetchstatus parameter, you can see its current offset.
GET http://localhost:5000/api/v1/namespaces/default/subscriptions/f826269c-65ed-4634-b24c-4f399ec53a32
{\n \"id\": \"f826269c-65ed-4634-b24c-4f399ec53a32\",\n \"namespace\": \"default\",\n \"name\": \"simple-storage\",\n \"transport\": \"websockets\",\n \"filter\": {\n \"events\": \"blockchain_event_received\",\n \"message\": {},\n \"transaction\": {},\n \"blockchainevent\": {\n \"listener\": \"1bfa3b0f-3d90-403e-94a4-af978d8c5b14\"\n }\n },\n \"options\": {\n \"firstEvent\": \"-1\",\n \"withData\": false\n },\n \"status\": {\n \"offset\": 20\n },\n \"created\": \"2022-03-15T17:35:30.131698921Z\",\n \"updated\": null\n}\n You've reached the end of the main guide to working with custom smart contracts in FireFly. Hopefully this was helpful and gives you what you need to get up and running with your own contracts. There are several additional ways to invoke or query smart contracts detailed below, so feel free to keep reading if you're curious.
"},{"location":"tutorials/custom_contracts/ethereum/#appendix-i-work-with-a-custom-contract-without-creating-a-named-api","title":"Appendix I: Work with a custom contract without creating a named API","text":"FireFly aims to offer a developer-friendly and flexible approach to using custom smart contracts. The guide above has detailed the most robust and feature-rich way to use custom contracts with FireFly, but there are several alternative API usage patterns available as well.
It is possible to broadcast a contract interface and use a smart contract that implements that interface without also broadcasting a named API as above. There are several key differences (which may or may not be desirable) compared to the method outlined in the full guide above:
POST http://localhost:5000/api/v1/namespaces/default/contracts/interfaces/8bdd27a5-67c1-4960-8d1e-7aa31b9084d3/invoke/set
{\n \"location\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n },\n \"input\": {\n \"newValue\": 7\n }\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#response_9","title":"Response","text":"{\n \"id\": \"f310fa4a-73d8-4777-9f9d-dfa5012a052f\"\n}\n All of the same invoke, query, and subscribe endpoints are available on the contract interface itself.
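The key difference from the named-API pattern is visible in the request shape: the contract location travels in every request body, because there is no API object carrying it. Here is a small sketch of assembling such a request, using the interface-scoped URL layout from this guide:

```python
def build_interface_invoke(interface_id, address, method, inputs):
    # Invoke via the contract interface directly (no named API):
    # the contract location must be supplied in the body each time.
    url = ("http://localhost:5000/api/v1/namespaces/default"
           f"/contracts/interfaces/{interface_id}/invoke/{method}")
    body = {"location": {"address": address}, "input": inputs}
    return url, body

url, body = build_interface_invoke(
    "8bdd27a5-67c1-4960-8d1e-7aa31b9084d3",
    "0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1",
    "set",
    {"newValue": 7},
)
```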
"},{"location":"tutorials/custom_contracts/ethereum/#appendix-ii-work-directly-with-contracts-with-inline-requests","title":"Appendix II: Work directly with contracts with inline requests","text":"The final way of working with custom smart contracts with FireFly is to just put everything FireFly needs all in one request, each time a contract is invoked or queried. This is the most lightweight, but least feature-rich way of using a custom contract.
To do this, we will need to put both the contract location, and a subset of the FireFly Interface that describes the method we want to invoke in the request body, in addition to the function input.
"},{"location":"tutorials/custom_contracts/ethereum/#request_10","title":"Request","text":"POST http://localhost:5000/api/v1/namespaces/default/contracts/invoke
{\n \"location\": {\n \"address\": \"0xa5ea5d0a6b2eaf194716f0cc73981939dca26da1\"\n },\n \"method\": {\n \"name\": \"set\",\n \"params\": [\n {\n \"name\": \"x\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"uint256\"\n }\n }\n }\n ],\n \"returns\": []\n },\n \"input\": {\n \"x\": 42\n }\n}\n"},{"location":"tutorials/custom_contracts/ethereum/#response_10","title":"Response","text":"{\n \"id\": \"386d3e23-e4bc-4a9b-bc1f-452f0a8c9ae5\"\n}\n"},{"location":"tutorials/custom_contracts/fabric/","title":"Work with Hyperledger Fabric chaincodes","text":"This guide describes the steps to deploy a chaincode to a Hyperledger Fabric blockchain and use FireFly to interact with it in order to submit transactions, query for states, and listen for events.
NOTE: This guide assumes that you are running a local FireFly stack with at least 2 members and a Fabric blockchain created by the FireFly CLI. If you need help getting that set up, please see the Getting Started guide to Start your environment.
"},{"location":"tutorials/custom_contracts/fabric/#example-smart-contract","title":"Example smart contract","text":"For this tutorial, we will be using a well known, but slightly modified smart contract called asset_transfer. It's based on the asset-transfer-basic chaincode in the fabric-samples project. Check out the code repository and use the source code provided below to replace part of the content of the file fabric-samples/asset-transfer-basic/chaincode-go/chaincode/smartcontract.go.
Find the following return statement in the function CreateAsset:
return ctx.GetStub().PutState(id, assetJSON)\n and replace it with the following, so that an event will be emitted when the transaction is committed to the channel ledger:
err = ctx.GetStub().PutState(id, assetJSON)\n if err != nil {\n return err\n }\n return ctx.GetStub().SetEvent(\"AssetCreated\", assetJSON)\n"},{"location":"tutorials/custom_contracts/fabric/#create-the-chaincode-package","title":"Create the chaincode package","text":"Use the peer command to create the chaincode package for deployment. You can download the peer binary from the releases page of the Fabric project or build it from source.
~ johndoe$ cd fabric-samples/asset-transfer-basic/chaincode-go\n chaincode-go johndoe$ touch core.yaml\n chaincode-go johndoe$ peer lifecycle chaincode package -p . --label asset_transfer ./asset_transfer.zip\n The peer command requires an empty core.yaml file to be present in the working directory to perform the packaging. That's what touch core.yaml did above.
The resulting asset_transfer.zip archive file will be used in the next step to deploy to the Fabric network used in FireFly.
Deployment of smart contracts is not currently within the scope of responsibility for FireFly. You can use your standard blockchain specific tools to deploy your contract to the blockchain you are using.
The FireFly CLI provides a convenient function to deploy a chaincode package to a local FireFly stack.
NOTE: The contract deployment function of the FireFly CLI is a convenience function to speed up local development, and not intended for production applications
~ johndoe$ ff help deploy fabric\nDeploy a packaged chaincode to the Fabric network used by a FireFly stack\n\nUsage:\n ff deploy fabric <stack_name> <chaincode_package> <channel> <chaincodeName> <version> [flags]\n Notice the various parameters used by the command ff deploy fabric. We'll tell the FireFly CLI to deploy using the following parameter values; if your stack setup is different, update the command accordingly:
dev for the stack name; firefly for the channel (this is the channel that is created by the FireFly CLI when bootstrapping the stack, replace if you use a different channel in your setup); asset_transfer for the chaincode name (must match the value of the --label parameter when creating the chaincode package); and 1.0 for the version: $ ff deploy fabric dev asset_transfer.zip firefly asset_transfer 1.0\ninstalling chaincode\nquerying installed chaincode\napproving chaincode\ncommitting chaincode\n{\n \"chaincode\": \"asset_transfer\",\n \"channel\": \"firefly\"\n}\n"},{"location":"tutorials/custom_contracts/fabric/#the-firefly-interface-format","title":"The FireFly Interface Format","text":"In order to teach FireFly how to interact with the chaincode, a FireFly Interface (FFI) document is needed. While Ethereum (or other EVM based blockchains) requires an Application Binary Interface (ABI) to govern the interaction between the client and the smart contract, which is specific to each smart contract interface design, Fabric defines a generic chaincode interface and leaves the encoding and decoding of the parameter values to the discretion of the chaincode developer.
As a result, the FFI document for a Fabric chaincode must be hand-crafted. The following FFI sample demonstrates the specification for the following common cases:
the CreateAsset input parameters, the GetAllAssets output, and the AssetCreated event properties: {\n \"namespace\": \"default\",\n \"name\": \"asset_transfer\",\n \"description\": \"Spec interface for the asset-transfer-basic golang chaincode\",\n \"version\": \"1.0\",\n \"methods\": [\n {\n \"name\": \"GetAllAssets\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [],\n \"returns\": [\n {\n \"name\": \"\",\n \"schema\": {\n \"type\": \"array\",\n \"details\": {\n \"type\": \"object\",\n \"properties\": {\n \"type\": \"string\"\n }\n }\n }\n }\n ]\n },\n {\n \"name\": \"CreateAsset\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"id\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"color\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"size\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"owner\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"value\",\n \"schema\": {\n \"type\": \"string\"\n }\n }\n ],\n \"returns\": []\n }\n ],\n \"events\": [\n {\n \"name\": \"AssetCreated\"\n }\n ]\n}\n"},{"location":"tutorials/custom_contracts/fabric/#input-parameters","title":"Input parameters","text":"For the params section of the CreateAsset function, it is critical that the sequence of the properties (id, color, size, owner, value) matches the order of the input parameters in the chaincode's function signature:
func CreateAsset(ctx contractapi.TransactionContextInterface, id string, color string, size int, owner string, appraisedValue int) error\n"},{"location":"tutorials/custom_contracts/fabric/#return-values","title":"Return values","text":"FireFly can automatically decode JSON payloads in the return values. That's why the returns section of the GetAllAssets function only needs to specify the type as array of objects, without having to specify the detailed structure of the JSON payload.
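Because an out-of-order params list in a hand-crafted FFI would silently misassign arguments, it can be worth sanity-checking the document before broadcasting it. A small sketch, with the expected order taken from the Go signature above:

```python
import json

# The CreateAsset method from the FFI above, trimmed to what the check needs.
ffi_method = json.loads("""
{
  "name": "CreateAsset",
  "params": [
    {"name": "id",    "schema": {"type": "string"}},
    {"name": "color", "schema": {"type": "string"}},
    {"name": "size",  "schema": {"type": "string"}},
    {"name": "owner", "schema": {"type": "string"}},
    {"name": "value", "schema": {"type": "string"}}
  ]
}
""")

# Positional arguments in the chaincode's Go signature,
# after the transaction context.
go_signature_order = ["id", "color", "size", "owner", "value"]

ffi_order = [p["name"] for p in ffi_method["params"]]
assert ffi_order == go_signature_order, f"FFI param order {ffi_order} is wrong"
```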
On the other hand, if certain properties of the returned value are to be hidden, then you can provide a detailed structure of the JSON object with the desired properties. This is demonstrated in the JSON structure for the event payload, see below, where the property AppraisedValue is omitted from the output.
For events, FireFly automatically decodes JSON payloads. If the event payload is not JSON, base64 encoded bytes will be returned instead. For the events section of the FFI, only the name property needs to be specified.
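To make that decoding rule concrete, here is a client-side sketch that mimics the behavior described above. It is an illustration of the rule, not FireFly's actual implementation.

```python
import base64
import json

def decode_event_payload(raw: bytes):
    # JSON payloads decode to structured data; anything else
    # comes back as base64 text, mirroring the rule above.
    try:
        return json.loads(raw)
    except ValueError:
        return base64.b64encode(raw).decode("ascii")

structured = decode_event_payload(b'{"ID": "asset-01"}')  # a dict
opaque = decode_event_payload(b"\x00\x01binary")          # a base64 string
```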
Now that we have a FireFly Interface representation of our chaincode, we want to broadcast that to the entire network. This broadcast will be pinned to the blockchain, so we can always refer to this specific name and version, and everyone in the network will know exactly which contract interface we are talking about.
We will use the FFI JSON constructed above and POST that to the /contracts/interfaces API endpoint.
POST http://localhost:5000/api/v1/namespaces/default/contracts/interfaces?publish=true
NOTE: Without passing the query parameter publish=true when the interface is created, it will initially be unpublished and not broadcasted to other members of the network (if configured in multi-party). To publish the interface, a subsequent API call would need to be made to /contracts/interfaces/{name}/{version}/publish
{\n \"namespace\": \"default\",\n \"name\": \"asset_transfer\",\n \"description\": \"Spec interface for the asset-transfer-basic golang chaincode\",\n \"version\": \"1.0\",\n \"methods\": [\n {\n \"name\": \"GetAllAssets\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [],\n \"returns\": [\n {\n \"name\": \"\",\n \"schema\": {\n \"type\": \"array\",\n \"details\": {\n \"type\": \"object\",\n \"properties\": {\n \"type\": \"string\"\n }\n }\n }\n }\n ]\n },\n {\n \"name\": \"CreateAsset\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"id\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"color\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"size\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"owner\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"value\",\n \"schema\": {\n \"type\": \"string\"\n }\n }\n ],\n \"returns\": []\n }\n ],\n \"events\": [\n {\n \"name\": \"AssetCreated\"\n }\n ]\n}\n"},{"location":"tutorials/custom_contracts/fabric/#response","title":"Response","text":"{\n \"id\": \"f1e5522c-59a5-4787-bbfd-89975e5b0954\",\n \"message\": \"8a01fc83-5729-418b-9706-6fc17c8d2aac\",\n \"namespace\": \"default\",\n \"name\": \"asset_transfer\",\n \"description\": \"Spec interface for the asset-transfer-basic golang chaincode\",\n \"version\": \"1.1\",\n \"methods\": [\n {\n \"id\": \"b31e3623-35e8-4918-bf8c-1b0d6c01de25\",\n \"interface\": \"f1e5522c-59a5-4787-bbfd-89975e5b0954\",\n \"name\": \"GetAllAssets\",\n \"namespace\": \"default\",\n \"pathname\": \"GetAllAssets\",\n \"description\": \"\",\n \"params\": [],\n \"returns\": [\n {\n \"name\": \"\",\n \"schema\": {\n \"type\": \"array\",\n \"details\": {\n \"type\": \"object\",\n \"properties\": {\n \"type\": \"string\"\n }\n }\n }\n }\n ]\n },\n {\n \"id\": \"e5a170d1-0be1-4697-800b-f4bcfaf71cf6\",\n \"interface\": \"f1e5522c-59a5-4787-bbfd-89975e5b0954\",\n \"name\": 
\"CreateAsset\",\n \"namespace\": \"default\",\n \"pathname\": \"CreateAsset\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"id\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"color\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"size\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"owner\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"value\",\n \"schema\": {\n \"type\": \"string\"\n }\n }\n ],\n \"returns\": []\n }\n ],\n \"events\": [\n {\n \"id\": \"27564533-30bd-4536-884e-02e5d79ec238\",\n \"interface\": \"f1e5522c-59a5-4787-bbfd-89975e5b0954\",\n \"namespace\": \"default\",\n \"pathname\": \"AssetCreated\",\n \"signature\": \"\",\n \"name\": \"AssetCreated\",\n \"description\": \"\",\n \"params\": null\n }\n ]\n}\n NOTE: We can broadcast this contract interface conveniently with the help of FireFly Sandbox running at http://127.0.0.1:5108
Go to the Contracts Section, click Define a Contract Interface, select FFI - FireFly Interface in the Interface Format dropdown, paste the FFI JSON crafted by you into the Schema Field, and click Run. Now comes the fun part where we see some of the powerful, developer-friendly features of FireFly. The next thing we're going to do is tell FireFly to build an HTTP API for this chaincode, complete with an OpenAPI Specification and Swagger UI. As part of this, we'll also tell FireFly where the chaincode is on the blockchain.
Like the interface broadcast above, this will also generate a broadcast which will be pinned to the blockchain so all the members of the network will be aware of and able to interact with this API.
We need to copy the id field we got in the response from the previous step to the interface.id field in the request body below. We will also pick a name that will be part of the URL for our HTTP API, so be sure to pick a name that is URL friendly. In this case we'll call it asset_transfer. Lastly, in the location field, we're telling FireFly where an instance of the chaincode is deployed on-chain, which is a chaincode named asset_transfer in the channel firefly.
NOTE: The location field is optional here, but if it is omitted, it will be required in every request to invoke or query the chaincode. This can be useful if you have multiple instances of the same chaincode deployed to different channels.
POST http://localhost:5000/api/v1/namespaces/default/apis?publish=true
NOTE: Without passing the query parameter publish=true when the API is created, it will initially be unpublished and not broadcasted to other members of the network (if configured in multi-party). To publish the API, a subsequent API call would need to be made to /apis/{apiName}/publish
{\n \"name\": \"asset_transfer\",\n \"interface\": {\n \"id\": \"f1e5522c-59a5-4787-bbfd-89975e5b0954\"\n },\n \"location\": {\n \"channel\": \"firefly\",\n \"chaincode\": \"asset_transfer\"\n }\n}\n"},{"location":"tutorials/custom_contracts/fabric/#response_1","title":"Response","text":"{\n \"id\": \"a9a9ab4e-2544-45d5-8824-3c05074fbf75\",\n \"namespace\": \"default\",\n \"interface\": {\n \"id\": \"f1e5522c-59a5-4787-bbfd-89975e5b0954\"\n },\n \"location\": {\n \"channel\": \"firefly\",\n \"chaincode\": \"asset_transfer\"\n },\n \"name\": \"asset_transfer\",\n \"message\": \"5f1556a1-5cb1-4bc6-8611-d8f88ccf9c30\",\n \"urls\": {\n \"openapi\": \"http://127.0.0.1:5000/api/v1/namespaces/default/apis/asset_transfer/api/swagger.json\",\n \"ui\": \"http://127.0.0.1:5000/api/v1/namespaces/default/apis/asset_transfer/api\"\n }\n}\n NOTE: We can create this HTTP API conveniently with the help of FireFly Sandbox running at http://127.0.0.1:5108
Go to the Contracts Section, click Register a Contract API, select your interface in the Contract Interface dropdown, in the Name Field give a name that will be part of the URL for your HTTP API, in the Chaincode Field give the chaincode name for which you wrote the FFI, in the Channel Field give the channel name where your chaincode is deployed, and click Run. You'll notice in the response body that there are a couple of URLs near the bottom. If you navigate to the one labeled ui in your browser, you should see the Swagger UI for your chaincode.
The /invoke endpoints in the generated API are for submitting transactions. These endpoints will be mapped to the POST /transactions endpoint of the FabConnect API.
The /query endpoints in the generated API, on the other hand, are for sending query requests. These endpoints will be mapped to the POST /query endpoint of the FabConnect API, which under the covers only sends chaincode endorsement requests to the target peer node without sending a transaction payload to the orderer node.
Now that we've got everything set up, it's time to use our chaincode! We're going to make a POST request to the invoke/CreateAsset endpoint to create a new asset.
POST http://localhost:5000/api/v1/namespaces/default/apis/asset_transfer/invoke/CreateAsset
{\n \"input\": {\n \"color\": \"blue\",\n \"id\": \"asset-01\",\n \"owner\": \"Harry\",\n \"size\": \"30\",\n \"value\": \"23400\"\n }\n}\n"},{"location":"tutorials/custom_contracts/fabric/#response_2","title":"Response","text":"{\n \"id\": \"b8e905cc-bc23-434a-af7d-13c6d85ae545\",\n \"namespace\": \"default\",\n \"tx\": \"79d2668e-4626-4634-9448-1b40fa0d9dfd\",\n \"type\": \"blockchain_invoke\",\n \"status\": \"Pending\",\n \"plugin\": \"fabric\",\n \"input\": {\n \"input\": {\n \"color\": \"blue\",\n \"id\": \"asset-02\",\n \"owner\": \"Harry\",\n \"size\": \"30\",\n \"value\": \"23400\"\n },\n \"interface\": \"f1e5522c-59a5-4787-bbfd-89975e5b0954\",\n \"key\": \"Org1MSP::x509::CN=org_0,OU=client::CN=fabric_ca.org1.example.com,OU=Hyperledger FireFly,O=org1.example.com,L=Raleigh,ST=North Carolina,C=US\",\n \"location\": {\n \"chaincode\": \"asset_transfer\",\n \"channel\": \"firefly\"\n },\n \"method\": {\n \"description\": \"\",\n \"id\": \"e5a170d1-0be1-4697-800b-f4bcfaf71cf6\",\n \"interface\": \"f1e5522c-59a5-4787-bbfd-89975e5b0954\",\n \"name\": \"CreateAsset\",\n \"namespace\": \"default\",\n \"params\": [\n {\n \"name\": \"id\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"color\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"size\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"owner\",\n \"schema\": {\n \"type\": \"string\"\n }\n },\n {\n \"name\": \"value\",\n \"schema\": {\n \"type\": \"string\"\n }\n }\n ],\n \"pathname\": \"CreateAsset\",\n \"returns\": []\n },\n \"methodPath\": \"CreateAsset\",\n \"type\": \"invoke\"\n },\n \"created\": \"2022-05-02T17:08:40.811630044Z\",\n \"updated\": \"2022-05-02T17:08:40.811630044Z\"\n}\n You'll notice that we got an ID back with status Pending, and that's expected due to the asynchronous programming model of working with custom onchain logic in FireFly. To see what the latest state is now, we can query the chaincode. 
In a little bit, we'll also subscribe to the events emitted by this chaincode so we can know when the state is updated in realtime.
To make a read-only request to the blockchain to check the current list of assets, we can make a POST to the query/GetAllAssets endpoint.
POST http://localhost:5000/api/v1/namespaces/default/apis/asset_transfer/query/GetAllAssets
{}\n"},{"location":"tutorials/custom_contracts/fabric/#response_3","title":"Response","text":"[\n {\n \"AppraisedValue\": 23400,\n \"Color\": \"blue\",\n \"ID\": \"asset-01\",\n \"Owner\": \"Harry\",\n \"Size\": 30\n }\n]\n NOTE: Some chaincodes may have queries that require input parameters. That's why the query endpoint is a POST, rather than a GET so that parameters can be passed as JSON in the request body. This particular function does not have any parameters, so we just pass an empty JSON object.
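Since GetAllAssets returns a plain JSON array, client code can work with the result directly. A small sketch using the response shape shown above:

```python
import json

# Response shape from query/GetAllAssets above.
response_text = """
[
  {"AppraisedValue": 23400, "Color": "blue", "ID": "asset-01",
   "Owner": "Harry", "Size": 30}
]
"""

assets = json.loads(response_text)

# Index asset IDs by owner, a typical client-side use of the result.
by_owner = {}
for asset in assets:
    by_owner.setdefault(asset["Owner"], []).append(asset["ID"])
```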
Now that we've seen how to submit transactions and perform read-only queries to the blockchain, let's look at how to receive blockchain events so we know when things are happening in realtime.
If you look at the source code for the smart contract we're working with above, you'll notice that it emits an event when a new asset is created. In order to receive these events, we first need to instruct FireFly to listen for this specific type of blockchain event. To do this, we create an Event Listener.
The /contracts/listeners endpoint is RESTful so there are POST, GET, and DELETE methods available on it. To create a new listener, we will make a POST request. We are going to tell FireFly to listen to events with name \"AssetCreated\" from the FireFly Interface we defined earlier, referenced by its ID. We will also tell FireFly which channel and chaincode we expect to emit these events.
POST http://localhost:5000/api/v1/namespaces/default/contracts/listeners
{\n \"filters\": [\n {\n \"interface\": {\n \"id\": \"f1e5522c-59a5-4787-bbfd-89975e5b0954\"\n },\n \"location\": {\n \"channel\": \"firefly\",\n \"chaincode\": \"asset_transfer\"\n },\n \"event\": {\n \"name\": \"AssetCreated\"\n }\n }\n ],\n \"options\": {\n \"firstEvent\": \"oldest\"\n },\n \"topic\": \"assets\"\n}\n"},{"location":"tutorials/custom_contracts/fabric/#response_4","title":"Response","text":"{\n \"id\": \"d6b5e774-c9e5-474c-9495-ec07fa47a907\",\n \"namespace\": \"default\",\n \"name\": \"sb-44aa348a-bafb-4243-594e-dcad689f1032\",\n \"backendId\": \"sb-44aa348a-bafb-4243-594e-dcad689f1032\",\n \"location\": {\n \"channel\": \"firefly\",\n \"chaincode\": \"asset_transfer\"\n },\n \"created\": \"2024-07-22T15:36:58.514085959Z\",\n \"event\": {\n \"name\": \"AssetCreated\",\n \"description\": \"\",\n \"params\": null\n },\n \"signature\": \"firefly-asset_transfer:AssetCreated\",\n \"topic\": \"assets\",\n \"options\": {\n \"firstEvent\": \"oldest\"\n },\n \"filters\": [\n {\n \"event\": {\n \"name\": \"AssetCreated\",\n \"description\": \"\",\n \"params\": null\n },\n \"location\": {\n \"channel\": \"firefly\",\n \"chaincode\": \"asset_transfer\"\n },\n \"signature\": \"firefly-asset_transfer:AssetCreated\"\n }\n ]\n}\n"},{"location":"tutorials/custom_contracts/fabric/#subscribe-to-events-from-our-contract","title":"Subscribe to events from our contract","text":"Now that we've told FireFly that it should listen for specific events on the blockchain, we can set up a Subscription for FireFly to send events to our client app. To set up our subscription, we will make a POST to the /subscriptions endpoint.
We will set a friendly name asset_transfer to identify the Subscription when we are connecting to it in the next step.
We're also going to set up a filter to only send blockchain events from the listener that we created in the previous step. To do that, we'll copy the listener ID from the step above (6e7f5dd8-5a57-4163-a1d2-5654e784dc31) and set that as the value of the listener field in the example below:
POST http://localhost:5000/api/v1/namespaces/default/subscriptions
{\n \"namespace\": \"default\",\n \"name\": \"asset_transfer\",\n \"transport\": \"websockets\",\n \"filter\": {\n \"events\": \"blockchain_event_received\",\n \"blockchainevent\": {\n \"listener\": \"6e7f5dd8-5a57-4163-a1d2-5654e784dc31\"\n }\n },\n \"options\": {\n \"firstEvent\": \"oldest\"\n }\n}\n"},{"location":"tutorials/custom_contracts/fabric/#response_5","title":"Response","text":"{\n \"id\": \"06d18b49-e763-4f5c-9e97-c25024fe57c8\",\n \"namespace\": \"default\",\n \"name\": \"asset_transfer\",\n \"transport\": \"websockets\",\n \"filter\": {\n \"events\": \"blockchain_event_received\",\n \"message\": {},\n \"transaction\": {},\n \"blockchainevent\": {\n \"listener\": \"6e7f5dd8-5a57-4163-a1d2-5654e784dc31\"\n }\n },\n \"options\": {\n \"firstEvent\": \"-1\",\n \"withData\": false\n },\n \"created\": \"2022-05-02T17:22:06.480181291Z\",\n \"updated\": null\n}\n"},{"location":"tutorials/custom_contracts/fabric/#receive-custom-smart-contract-events","title":"Receive custom smart contract events","text":"The last step is to connect a WebSocket client to FireFly to receive the event. You can use any WebSocket client you like, such as Postman or a command line app like websocat.
Connect your WebSocket client to ws://localhost:5000/ws.
After connecting the WebSocket client, send a message to tell FireFly to:
Start receiving events on the subscription named asset_transfer, in the default namespace, automatically acknowledging each event: {\n \"type\": \"start\",\n \"name\": \"asset_transfer\",\n \"namespace\": \"default\",\n \"autoack\": true\n}\n"},{"location":"tutorials/custom_contracts/fabric/#websocket-event","title":"WebSocket event","text":"After creating the subscription, you should see an event arrive on the connected WebSocket client that looks something like this:
{\n \"id\": \"d9fb86b2-b25b-43b8-80d3-936c5daa5a66\",\n \"sequence\": 29,\n \"type\": \"blockchain_event_received\",\n \"namespace\": \"default\",\n \"reference\": \"e0d670b4-a1b6-4efd-a985-06dfaaa58fe3\",\n \"topic\": \"assets\",\n \"created\": \"2022-05-02T17:26:57.57612001Z\",\n \"blockchainEvent\": {\n \"id\": \"e0d670b4-a1b6-4efd-a985-06dfaaa58fe3\",\n \"source\": \"fabric\",\n \"namespace\": \"default\",\n \"name\": \"AssetCreated\",\n \"listener\": \"6e7f5dd8-5a57-4163-a1d2-5654e784dc31\",\n \"protocolId\": \"000000000015/000000/000000\",\n \"output\": {\n \"AppraisedValue\": 12300,\n \"Color\": \"red\",\n \"ID\": \"asset-01\",\n \"Owner\": \"Jerry\",\n \"Size\": 10\n },\n \"info\": {\n \"blockNumber\": 15,\n \"chaincodeId\": \"asset_transfer\",\n \"eventIndex\": 0,\n \"eventName\": \"AssetCreated\",\n \"subId\": \"sb-2cac2bfa-38af-4408-4ff3-973421410e5d\",\n \"timestamp\": 1651512414920972300,\n \"transactionId\": \"172637bf59a3520ca6dd02f716e1043ba080e10e1cd2f98b4e6b85abcc6a6d69\",\n \"transactionIndex\": 0\n },\n \"timestamp\": \"2022-05-02T17:26:54.9209723Z\",\n \"tx\": {\n \"type\": \"\",\n \"blockchainId\": \"172637bf59a3520ca6dd02f716e1043ba080e10e1cd2f98b4e6b85abcc6a6d69\"\n }\n },\n \"subscription\": {\n \"id\": \"06d18b49-e763-4f5c-9e97-c25024fe57c8\",\n \"namespace\": \"default\",\n \"name\": \"asset_transfer\"\n }\n}\n You can see in the event received over the WebSocket connection, the blockchain event that was emitted from our first transaction, which happened in the past. We received this event, because when we set up both the Listener, and the Subscription, we specified the \"firstEvent\" as \"oldest\". This tells FireFly to look for this event from the beginning of the blockchain, and that your app is interested in FireFly events since the beginning of FireFly's event history.
In the event, we can also see the blockchainEvent itself, which has an output object. This contains the event payload that was set by the chaincode.
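Delivered events are plain JSON, so client-side handling is ordinary JSON navigation. The sketch below pulls the chaincode's payload out of a truncated sample shaped like the event above (the sample dict here is abbreviated from the full delivery for illustration):

```python
import json

# Truncated sample of a "blockchain_event_received" delivery, shaped like the
# event shown above (abbreviated for illustration).
raw = json.dumps({
    "type": "blockchain_event_received",
    "topic": "assets",
    "blockchainEvent": {
        "name": "AssetCreated",
        "output": {"ID": "asset-01", "Owner": "Jerry", "AppraisedValue": 12300},
        "info": {"blockNumber": 15, "chaincodeId": "asset_transfer"},
    },
})

event = json.loads(raw)
if event["type"] == "blockchain_event_received":
    # "output" carries the event payload set by the chaincode
    payload = event["blockchainEvent"]["output"]
    print(payload["ID"], payload["Owner"])
```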
This guide describes how to associate an arbitrary off-chain payload with a blockchain transaction on a contract of your own design. A hash of the payload will be recorded as part of the blockchain transaction, and on the receiving side, FireFly will ensure that both the on-chain and off-chain pieces are received and aggregated together.
NOTE: This is an advanced FireFly feature. Before following any of the steps in this guide, you should be very familiar and comfortable with the basic features of how broadcast messages and private messages work, be proficient at custom contract development on your blockchain of choice, and understand the fundamentals of how FireFly interacts with custom contracts.
"},{"location":"tutorials/custom_contracts/pinning/#designing-a-compatible-contract","title":"Designing a compatible contract","text":"In order to allow pinning a FireFly message batch with a custom contract transaction, your contract must meet certain criteria.
First, any external method of the contract that will be used for associating with off-chain payloads must provide an extra parameter for passing the encoded batch data. This must be the last parameter in the method signature. This convention is chosen partly to align with the Ethereum ERC5750 standard, but should serve as a straightforward guideline for nearly any blockchain.
Second, this method must emit a BatchPin event that can be received and parsed by FireFly. Exactly how the data is unpacked and used to emit this event will differ for each blockchain.
An example Solidity contract: import \"@hyperledger/firefly-contracts/contracts/IBatchPin.sol\";\n\ncontract CustomPin {\n IBatchPin firefly;\n\n function setFireFlyAddress(address addr) external {\n firefly = IBatchPin(addr);\n }\n\n function sayHello(bytes calldata data) external {\n require(\n address(firefly) != address(0),\n \"CustomPin: FireFly address has not been set\"\n );\n\n /* do custom things */\n\n firefly.pinBatchData(data);\n }\n}\n The method must take a final parameter with the encoded batch data (of type bytes). The method must invoke the pinBatchData method of the FireFly Multiparty Contract and pass along this data payload. It is generally good practice to trigger this as a final step before returning, after the method has performed its own logic. The address of the FireFly Multiparty Contract can be queried from the /status API (under multiparty.contract.location as of FireFly v1.1.0). However, the application must also consider how to appropriately secure this functionality, and how to update this location if a multiparty \"network action\" is used to migrate the network onto a new FireFly multiparty contract. An equivalent Fabric chaincode example in Go: package chaincode\n\nimport (\n \"encoding/json\"\n \"fmt\"\n\n \"github.com/hyperledger/fabric-contract-api-go/contractapi\"\n \"github.com/hyperledger/firefly/custompin_sample/batchpin\"\n)\n\ntype SmartContract struct {\n contractapi.Contract\n}\n\nfunc (s *SmartContract) MyCustomPin(ctx contractapi.TransactionContextInterface, data string) error {\n event, err := batchpin.BuildEventFromString(ctx, data)\n if err != nil {\n return err\n }\n bytes, err := json.Marshal(event)\n if err != nil {\n return fmt.Errorf(\"failed to marshal event: %s\", err)\n }\n return ctx.GetStub().SetEvent(\"BatchPin\", bytes)\n}\n Here the method must take a final parameter with the encoded batch data (of type string). The method must unpack this argument into a JSON object and set a BatchPin event in the same format that is used by the FireFly Multiparty Contract. Once you have a contract designed, you can initialize your environment using the blockchain of your choice.
No special initialization arguments are needed for Ethereum.
If you are using Fabric, you must pass the --custom-pin-support argument when initializing your FireFly stack. This will ensure that the BatchPin event listener listens to events from all chaincode deployed on the default channel, instead of only listening to events from the pre-deployed FireFly chaincode.
You can follow the normal steps for Ethereum or Fabric to define your contract interface and API in FireFly. When invoking the contract, you can include a message payload alongside the other parameters.
POST http://localhost:5000/api/v1/namespaces/default/apis/custom-pin/invoke/sayHello
{\n \"input\": {},\n \"message\": {\n \"data\": [\n {\n \"value\": \"payload here\"\n }\n ]\n }\n}\n"},{"location":"tutorials/custom_contracts/pinning/#listening-for-events","title":"Listening for events","text":"All parties that receive the message will receive a message_confirmed on their event listeners. This event confirms that the off-chain payload has been received (via data exchange or shared storage) and that the blockchain transaction has been received and sequenced. It is guaranteed that these message_confirmed events will be ordered based on the sequence of the on-chain transactions, regardless of when the off-chain payload becomes available. This means that all parties will order messages on a given topic in exactly the same order, allowing for deterministic but decentralized event-driven architecture.
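The ordering guarantee above can be illustrated with plain sorting: however deliveries arrive at each party, consumers that order by the on-chain sequence converge on the same view. This is a simplification with made-up sequence numbers, not FireFly's actual aggregation logic:

```python
# Simplified illustration: two parties receive the same confirmed messages,
# possibly in different arrival orders, and order them by on-chain sequence.
party_a = [{"seq": 2, "msg": "B"}, {"seq": 1, "msg": "A"}, {"seq": 3, "msg": "C"}]
party_b = [{"seq": 3, "msg": "C"}, {"seq": 2, "msg": "B"}, {"seq": 1, "msg": "A"}]

def ordered(events):
    # Sort by blockchain sequence, ignoring arrival order
    return [e["msg"] for e in sorted(events, key=lambda e: e["seq"])]

assert ordered(party_a) == ordered(party_b)  # both converge on the same order
```

This convergence is what makes deterministic, decentralized event-driven architectures possible on top of message_confirmed events.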
This guide describes the steps to deploy a smart contract to a Tezos blockchain and use FireFly to interact with it in order to submit transactions, query for state, and listen for events.
"},{"location":"tutorials/custom_contracts/tezos/#smart-contract-languages","title":"Smart Contract Languages","text":"Smart contracts on Tezos can be programmed using familiar, developer-friendly languages. All features available on Tezos can be written in any of the high-level languages used to write smart contracts, such as Archetype, LIGO, and SmartPy. These languages all compile down to Michelson and you can switch between languages based on your preferences and projects.
NOTE: For this tutorial we are going to use SmartPy for building Tezos smart contracts utilizing the broadly adopted Python language.
"},{"location":"tutorials/custom_contracts/tezos/#example-smart-contract","title":"Example smart contract","text":"First let's look at a simple smart contract called SimpleStorage, which we will be using on a Tezos blockchain. Here we have one state variable called 'storedValue', initialized with the value 12. During initialization the type of the variable was defined as 'int'. You can see more at SmartPy types. We then added a simple test, which sets the storage value to 15 and checks that the value was changed as expected.
NOTE: Smart contract tests (marked with the @sp.add_test annotation) are used to verify the validity of contract entrypoints and do not affect the state of the contract during deployment.
Here is the source for this contract:
import smartpy as sp\n\n@sp.module\ndef main():\n # Declares a new contract\n class SimpleStorage(sp.Contract):\n # Storage. Persists in between transactions\n def __init__(self, value):\n self.data.x = value\n\n # Allows the stored integer to be changed\n @sp.entrypoint\n def set(self, params):\n self.data.x = params.value\n\n # Returns the currently stored integer\n @sp.onchain_view()\n def get(self):\n return self.data.x\n\n@sp.add_test()\ndef test():\n # Create a test scenario\n scenario = sp.test_scenario(\"Test simple storage\", main)\n scenario.h1(\"SimpleStorage\")\n\n # Initialize the contract\n c = main.SimpleStorage(12)\n\n # Run some test cases\n scenario += c\n c.set(value=15)\n scenario.verify(c.data.x == 15)\n scenario.verify(scenario.compute(c.get()) == 15)\n"},{"location":"tutorials/custom_contracts/tezos/#contract-deployment-via-smartpy-ide","title":"Contract deployment via SmartPy IDE","text":"To deploy the contract, we will use SmartPy IDE.
Here we can see that our new contract address is KT1ED4gj2xZnp8318yxa5NpvyvW15pqe4yFg. This is the address that we will reference in the rest of this guide.
To deploy the contract, we can use the HTTP API: POST http://localhost:5000/api/v1/namespaces/default/contracts/deploy
{\n \"contract\": {\n \"code\": [\n {\n \"prim\": \"storage\",\n \"args\": [\n {\n \"prim\": \"int\"\n }\n ]\n },\n {\n \"prim\": \"parameter\",\n \"args\": [\n {\n \"prim\": \"int\",\n \"annots\": [\"%set\"]\n }\n ]\n },\n {\n \"prim\": \"code\",\n \"args\": [\n [\n {\n \"prim\": \"CAR\"\n },\n {\n \"prim\": \"NIL\",\n \"args\": [\n {\n \"prim\": \"operation\"\n }\n ]\n },\n {\n \"prim\": \"PAIR\"\n }\n ]\n ]\n },\n {\n \"prim\": \"view\",\n \"args\": [\n {\n \"string\": \"get\"\n },\n {\n \"prim\": \"unit\"\n },\n {\n \"prim\": \"int\"\n },\n [\n {\n \"prim\": \"CDR\"\n }\n ]\n ]\n }\n ],\n \"storage\": {\n \"int\": \"12\"\n }\n }\n}\n The contract field contains two fields: code, with the Michelson code of the contract, and storage, with the initial storage values.
The response of the request above:
{\n \"id\": \"0c3810c7-baed-4077-9d2c-af316a4a567f\",\n \"namespace\": \"default\",\n \"tx\": \"21d03e6d-d106-48f4-aacd-688bf17b71fd\",\n \"type\": \"blockchain_deploy\",\n \"status\": \"Pending\",\n \"plugin\": \"tezos\",\n \"input\": {\n \"contract\": {\n \"code\": [\n {\n \"args\": [\n {\n \"prim\": \"int\"\n }\n ],\n \"prim\": \"storage\"\n },\n {\n \"args\": [\n {\n \"annots\": [\"%set\"],\n \"prim\": \"int\"\n }\n ],\n \"prim\": \"parameter\"\n },\n {\n \"args\": [\n [\n {\n \"prim\": \"CAR\"\n },\n {\n \"args\": [\n {\n \"prim\": \"operation\"\n }\n ],\n \"prim\": \"NIL\"\n },\n {\n \"prim\": \"PAIR\"\n }\n ]\n ],\n \"prim\": \"code\"\n },\n {\n \"args\": [\n {\n \"string\": \"get\"\n },\n {\n \"prim\": \"unit\"\n },\n {\n \"prim\": \"int\"\n },\n [\n {\n \"prim\": \"CDR\"\n }\n ]\n ],\n \"prim\": \"view\"\n }\n ],\n \"storage\": {\n \"int\": \"12\"\n }\n },\n \"definition\": null,\n \"input\": null,\n \"key\": \"tz1V3spuktTP2wuEZP7D2hJruLZ5uJTuJk31\",\n \"options\": null\n },\n \"created\": \"2024-04-01T14:20:20.665039Z\",\n \"updated\": \"2024-04-01T14:20:20.665039Z\"\n}\n The success result of deploy can be checked by GET http://localhost:5000/api/v1/namespaces/default/operations/0c3810c7-baed-4077-9d2c-af316a4a567f where 0c3810c7-baed-4077-9d2c-af316a4a567f is operation id from response above.
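A client typically polls this operations endpoint until the status settles. The decision logic can be sketched as below; the status values Pending and Succeeded appear in this guide, while Failed is an assumed terminal error state:

```python
# Sketch of deciding whether to keep polling GET .../operations/{id}.
# "Pending" and "Succeeded" appear in this guide; "Failed" is an assumed
# terminal error state, not confirmed by this tutorial.
TERMINAL_STATUSES = {"Succeeded", "Failed"}

def is_done(operation: dict) -> bool:
    # Keep polling while the operation is still in flight
    return operation.get("status") in TERMINAL_STATUSES

assert not is_done({"status": "Pending"})
assert is_done({"status": "Succeeded"})
```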
The success response:
{\n \"id\": \"0c3810c7-baed-4077-9d2c-af316a4a567f\",\n \"namespace\": \"default\",\n \"tx\": \"21d03e6d-d106-48f4-aacd-688bf17b71fd\",\n \"type\": \"blockchain_deploy\",\n \"status\": \"Succeeded\",\n \"plugin\": \"tezos\",\n \"input\": {\n \"contract\": {\n \"code\": [\n {\n \"args\": [\n {\n \"prim\": \"int\"\n }\n ],\n \"prim\": \"storage\"\n },\n {\n \"args\": [\n {\n \"annots\": [\"%set\"],\n \"prim\": \"int\"\n }\n ],\n \"prim\": \"parameter\"\n },\n {\n \"args\": [\n [\n {\n \"prim\": \"CAR\"\n },\n {\n \"args\": [\n {\n \"prim\": \"operation\"\n }\n ],\n \"prim\": \"NIL\"\n },\n {\n \"prim\": \"PAIR\"\n }\n ]\n ],\n \"prim\": \"code\"\n },\n {\n \"args\": [\n {\n \"string\": \"get\"\n },\n {\n \"prim\": \"unit\"\n },\n {\n \"prim\": \"int\"\n },\n [\n {\n \"prim\": \"CDR\"\n }\n ]\n ],\n \"prim\": \"view\"\n }\n ],\n \"storage\": {\n \"int\": \"12\"\n }\n },\n \"definition\": null,\n \"input\": null,\n \"key\": \"tz1V3spuktTP2wuEZP7D2hJruLZ5uJTuJk31\",\n \"options\": null\n },\n \"output\": {\n \"headers\": {\n \"requestId\": \"default:0c3810c7-baed-4077-9d2c-af316a4a567f\",\n \"type\": \"TransactionSuccess\"\n },\n \"protocolId\": \"ProxfordYmVfjWnRcgjWH36fW6PArwqykTFzotUxRs6gmTcZDuH\",\n \"transactionHash\": \"ootDut4xxR2yeYz6JuySuyTVZnXgda2t8SYrk3iuJpm531TZuCj\"\n },\n \"created\": \"2024-04-01T14:20:20.665039Z\",\n \"updated\": \"2024-04-01T14:20:20.665039Z\",\n \"detail\": {\n \"created\": \"2024-04-01T14:20:21.928976Z\",\n \"firstSubmit\": \"2024-04-01T14:20:22.714493Z\",\n \"from\": \"tz1V3spuktTP2wuEZP7D2hJruLZ5uJTuJk31\",\n \"gasPrice\": \"0\",\n \"historySummary\": [\n {\n \"count\": 1,\n \"firstOccurrence\": \"2024-04-01T14:20:21.930764Z\",\n \"lastOccurrence\": \"2024-04-01T14:20:21.930765Z\",\n \"subStatus\": \"Received\"\n },\n {\n \"action\": \"AssignNonce\",\n \"count\": 2,\n \"firstOccurrence\": \"2024-04-01T14:20:21.930767Z\",\n \"lastOccurrence\": \"2024-04-01T14:20:22.714772Z\"\n },\n {\n \"action\": \"RetrieveGasPrice\",\n \"count\": 
1,\n \"firstOccurrence\": \"2024-04-01T14:20:22.714774Z\",\n \"lastOccurrence\": \"2024-04-01T14:20:22.714774Z\"\n },\n {\n \"action\": \"SubmitTransaction\",\n \"count\": 1,\n \"firstOccurrence\": \"2024-04-01T14:20:22.715269Z\",\n \"lastOccurrence\": \"2024-04-01T14:20:22.715269Z\"\n },\n {\n \"action\": \"ReceiveReceipt\",\n \"count\": 1,\n \"firstOccurrence\": \"2024-04-01T14:20:29.244396Z\",\n \"lastOccurrence\": \"2024-04-01T14:20:29.244396Z\"\n },\n {\n \"action\": \"Confirm\",\n \"count\": 1,\n \"firstOccurrence\": \"2024-04-01T14:20:29.244762Z\",\n \"lastOccurrence\": \"2024-04-01T14:20:29.244762Z\"\n }\n ],\n \"id\": \"default:0c3810c7-baed-4077-9d2c-af316a4a567f\",\n \"lastSubmit\": \"2024-04-01T14:20:22.714493Z\",\n \"nonce\": \"23094946\",\n \"policyInfo\": {},\n \"receipt\": {\n \"blockHash\": \"BLvWL4t8GbaufGcQwiv3hHCsvgD6qwXfAXofyvojSMoFeGMXMR1\",\n \"blockNumber\": \"5868268\",\n \"contractLocation\": {\n \"address\": \"KT1CkTPsgTUQxR3CCpvtrcuQFV5Jf7cJgHFg\"\n },\n \"extraInfo\": [\n {\n \"consumedGas\": \"584\",\n \"contractAddress\": \"KT1CkTPsgTUQxR3CCpvtrcuQFV5Jf7cJgHFg\",\n \"counter\": null,\n \"errorMessage\": null,\n \"fee\": null,\n \"from\": null,\n \"gasLimit\": null,\n \"paidStorageSizeDiff\": \"75\",\n \"status\": \"applied\",\n \"storage\": null,\n \"storageLimit\": null,\n \"storageSize\": \"75\",\n \"to\": null\n }\n ],\n \"protocolId\": \"ProxfordYmVfjWnRcgjWH36fW6PArwqykTFzotUxRs6gmTcZDuH\",\n \"success\": true,\n \"transactionIndex\": \"0\"\n },\n \"sequenceId\": \"018e9a08-582a-01ec-9209-9d79ef742c9b\",\n \"status\": \"Succeeded\",\n \"transactionData\": \"c37274b662d68da8fdae2a02ad6c460a79933c70c6fa7500dc98a9ade6822f026d00673bb6e6298063f97940953de23d441ab20bf757f602a3cd810bad05b003000000000041020000003c0500045b00000004257365740501035b050202000000080316053d036d03420991000000130100000003676574036c035b020000000203170000000000000002000c\",\n \"transactionHash\": \"ootDut4xxR2yeYz6JuySuyTVZnXgda2t8SYrk3iuJpm531TZuCj\",\n 
\"transactionHeaders\": {\n \"from\": \"tz1V3spuktTP2wuEZP7D2hJruLZ5uJTuJk31\",\n \"nonce\": \"23094946\"\n },\n \"updated\": \"2024-04-01T14:20:29.245172Z\"\n }\n}\n"},{"location":"tutorials/custom_contracts/tezos/#the-firefly-interface-format","title":"The FireFly Interface Format","text":"As we know from the previous section, smart contracts on the Tezos blockchain use a domain-specific, stack-based programming language called Michelson. It is a key component of the Tezos platform and plays a fundamental role in defining the behavior of smart contracts and facilitating their execution. This language is very efficient but also tricky and challenging to learn, so in order to teach FireFly how to interact with the smart contract, we will use the FireFly Interface (FFI) to define the contract interface, which will later be encoded to Michelson.
"},{"location":"tutorials/custom_contracts/tezos/#schema-details","title":"Schema details","text":"The details field is used to encapsulate blockchain-specific type information about a specific field. (More details at schema details)
internalType is a field used to describe Tezos primitive types:
{\n \"details\": {\n \"type\": \"address\",\n \"internalType\": \"address\"\n }\n}\n internalSchema, in turn, is used to describe more complex Tezos types such as list, struct, or variant.
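The two shapes of the details object can be captured with small helpers. These are illustrative constructors for the JSON shown in this section, not part of any FireFly SDK:

```python
# Hypothetical helpers that build the two "details" shapes shown here.
def primitive_details(json_type: str, internal_type: str) -> dict:
    # Primitive Tezos types use "internalType"
    return {"details": {"type": json_type, "internalType": internal_type}}

def schema_details(internal_schema: dict) -> dict:
    # Complex Tezos types (list, struct, variant, map) use "internalSchema"
    return {"details": {"type": "schema", "internalSchema": internal_schema}}

address = primitive_details("address", "address")
token = schema_details({"type": "struct",
                        "args": [{"name": "token_id", "type": "nat"}]})
print(address)
```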
Struct example:
{\n \"details\": {\n \"type\": \"schema\",\n \"internalSchema\": {\n \"type\": \"struct\",\n \"args\": [\n {\n \"name\": \"metadata\",\n \"type\": \"bytes\"\n },\n {\n \"name\": \"token_id\",\n \"type\": \"nat\"\n }\n ]\n }\n }\n}\n List example:
{\n \"details\": {\n \"type\": \"schema\",\n \"internalSchema\": {\n \"type\": \"struct\",\n \"args\": [\n {\n \"name\": \"metadata\",\n \"type\": \"bytes\"\n },\n {\n \"name\": \"token_id\",\n \"type\": \"nat\"\n }\n ]\n }\n }\n}\n Variant example:
{\n \"details\": {\n \"type\": \"schema\",\n \"internalSchema\": {\n \"type\": \"variant\",\n \"variants\": [\"add_operator\", \"remove_operator\"],\n \"args\": [\n {\n \"type\": \"struct\",\n \"args\": [\n {\n \"name\": \"owner\",\n \"type\": \"address\"\n },\n {\n \"name\": \"operator\",\n \"type\": \"address\"\n },\n {\n \"name\": \"token_id\",\n \"type\": \"nat\"\n }\n ]\n }\n ]\n }\n }\n}\n Map example:
{\n \"details\": {\n \"type\": \"schema\",\n \"internalSchema\": {\n \"type\": \"map\",\n \"args\": [\n {\n \"name\": \"key\",\n \"type\": \"integer\"\n },\n {\n \"name\": \"value\",\n \"type\": \"string\"\n }\n ]\n }\n }\n}\n"},{"location":"tutorials/custom_contracts/tezos/#options","title":"Options","text":"Option type is used to indicate a value as optional (see more at smartpy options)
{\n \"details\": {\n \"type\": \"string\",\n \"internalType\": \"string\",\n \"kind\": \"option\"\n }\n}\n"},{"location":"tutorials/custom_contracts/tezos/#fa2-example","title":"FA2 example","text":"The following FFI sample demonstrates the specification for the widely used FA2 (analogue of ERC721 for EVM) smart contract:
{\n \"namespace\": \"default\",\n \"name\": \"fa2\",\n \"version\": \"v1.0.0\",\n \"description\": \"\",\n \"methods\": [\n {\n \"name\": \"burn\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"token_ids\",\n \"schema\": {\n \"type\": \"array\",\n \"details\": {\n \"type\": \"nat\",\n \"internalType\": \"nat\"\n }\n }\n }\n ],\n \"returns\": []\n },\n {\n \"name\": \"destroy\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [],\n \"returns\": []\n },\n {\n \"name\": \"mint\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"owner\",\n \"schema\": {\n \"type\": \"string\",\n \"details\": {\n \"type\": \"address\",\n \"internalType\": \"address\"\n }\n }\n },\n {\n \"name\": \"requests\",\n \"schema\": {\n \"type\": \"array\",\n \"details\": {\n \"type\": \"schema\",\n \"internalSchema\": {\n \"type\": \"struct\",\n \"args\": [\n {\n \"name\": \"metadata\",\n \"type\": \"bytes\"\n },\n {\n \"name\": \"token_id\",\n \"type\": \"nat\"\n }\n ]\n }\n }\n }\n }\n ],\n \"returns\": []\n },\n {\n \"name\": \"pause\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"pause\",\n \"schema\": {\n \"type\": \"boolean\",\n \"details\": {\n \"type\": \"boolean\",\n \"internalType\": \"boolean\"\n }\n }\n }\n ],\n \"returns\": []\n },\n {\n \"name\": \"select\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"batch\",\n \"schema\": {\n \"type\": \"array\",\n \"details\": {\n \"type\": \"schema\",\n \"internalSchema\": {\n \"type\": \"struct\",\n \"args\": [\n {\n \"name\": \"token_id\",\n \"type\": \"nat\"\n },\n {\n \"name\": \"recipient\",\n \"type\": \"address\"\n },\n {\n \"name\": \"token_id_start\",\n \"type\": \"nat\"\n },\n {\n \"name\": \"token_id_end\",\n \"type\": \"nat\"\n }\n ]\n }\n }\n }\n }\n ],\n \"returns\": []\n },\n {\n \"name\": \"transfer\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [\n {\n 
\"name\": \"batch\",\n \"schema\": {\n \"type\": \"array\",\n \"details\": {\n \"type\": \"schema\",\n \"internalSchema\": {\n \"type\": \"struct\",\n \"args\": [\n {\n \"name\": \"from_\",\n \"type\": \"address\"\n },\n {\n \"name\": \"txs\",\n \"type\": \"list\",\n \"args\": [\n {\n \"type\": \"struct\",\n \"args\": [\n {\n \"name\": \"to_\",\n \"type\": \"address\"\n },\n {\n \"name\": \"token_id\",\n \"type\": \"nat\"\n },\n {\n \"name\": \"amount\",\n \"type\": \"nat\"\n }\n ]\n }\n ]\n }\n ]\n }\n }\n }\n }\n ],\n \"returns\": []\n },\n {\n \"name\": \"update_admin\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"admin\",\n \"schema\": {\n \"type\": \"string\",\n \"details\": {\n \"type\": \"address\",\n \"internalType\": \"address\"\n }\n }\n }\n ],\n \"returns\": []\n },\n {\n \"name\": \"update_operators\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"requests\",\n \"schema\": {\n \"type\": \"array\",\n \"details\": {\n \"type\": \"schema\",\n \"internalSchema\": {\n \"type\": \"variant\",\n \"variants\": [\"add_operator\", \"remove_operator\"],\n \"args\": [\n {\n \"type\": \"struct\",\n \"args\": [\n {\n \"name\": \"owner\",\n \"type\": \"address\"\n },\n {\n \"name\": \"operator\",\n \"type\": \"address\"\n },\n {\n \"name\": \"token_id\",\n \"type\": \"nat\"\n }\n ]\n }\n ]\n }\n }\n }\n }\n ],\n \"returns\": []\n }\n ],\n \"events\": []\n}\n"},{"location":"tutorials/custom_contracts/tezos/#broadcast-the-contract-interface","title":"Broadcast the contract interface","text":"Now that we have a FireFly Interface representation of our smart contract, we want to broadcast that to the entire network. This broadcast will be pinned to the blockchain, so we can always refer to this specific name and version, and everyone in the network will know exactly which contract interface we are talking about.
We will use the FFI JSON constructed above and POST that to the /contracts/interfaces API endpoint.
POST http://localhost:5000/api/v1/namespaces/default/contracts/interfaces
{\n \"namespace\": \"default\",\n \"name\": \"simplestorage\",\n \"version\": \"v1.0.0\",\n \"description\": \"\",\n \"methods\": [\n {\n \"name\": \"set\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"newValue\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"integer\",\n \"internalType\": \"integer\"\n }\n }\n }\n ],\n \"returns\": []\n },\n {\n \"name\": \"get\",\n \"pathname\": \"\",\n \"description\": \"\",\n \"params\": [],\n \"returns\": []\n }\n ],\n \"events\": []\n}\n"},{"location":"tutorials/custom_contracts/tezos/#response","title":"Response","text":"{\n \"id\": \"f9e34787-e634-46cd-af47-b52c537404ff\",\n \"namespace\": \"default\",\n \"name\": \"simplestorage\",\n \"description\": \"\",\n \"version\": \"v1.0.0\",\n \"methods\": [\n {\n \"id\": \"78f13a7f-7b85-47c3-bf51-346a9858c027\",\n \"interface\": \"f9e34787-e634-46cd-af47-b52c537404ff\",\n \"name\": \"set\",\n \"namespace\": \"default\",\n \"pathname\": \"set\",\n \"description\": \"\",\n \"params\": [\n {\n \"name\": \"newValue\",\n \"schema\": {\n \"type\": \"integer\",\n \"details\": {\n \"type\": \"integer\",\n \"internalType\": \"integer\"\n }\n }\n }\n ],\n \"returns\": []\n },\n {\n \"id\": \"ee864e25-c3f7-42d3-aefd-a82f753e9002\",\n \"interface\": \"f9e34787-e634-46cd-af47-b52c537404ff\",\n \"name\": \"get\",\n \"namespace\": \"tezos\",\n \"pathname\": \"get\",\n \"description\": \"\",\n \"params\": [],\n \"returns\": []\n }\n ]\n}\n NOTE: We can broadcast this contract interface conveniently with the help of FireFly Sandbox running at http://127.0.0.1:5108
In the Contracts Section of the FireFly Sandbox, choose Define a Contract Interface, select FFI - FireFly Interface in the Interface Format dropdown, paste the FFI JSON crafted by you into the Schema Field, and click Run. Now comes the fun part where we see some of the powerful, developer-friendly features of FireFly. The next thing we're going to do is tell FireFly to build an HTTP API for this smart contract, complete with an OpenAPI Specification and Swagger UI. As part of this, we'll also tell FireFly where the contract is on the blockchain.
Like the interface broadcast above, this will also generate a broadcast which will be pinned to the blockchain so all the members of the network will be aware of and able to interact with this API.
We need to copy the id field we got in the response from the previous step to the interface.id field in the request body below. We will also pick a name that will be part of the URL for our HTTP API, so be sure to pick a name that is URL friendly. In this case we'll call it simple-storage. Lastly, in the location.address field, we're telling FireFly where an instance of the contract is deployed on-chain.
NOTE: The location field is optional here, but if it is omitted, it will be required in every request to invoke or query the contract. This can be useful if you have multiple instances of the same contract deployed to different addresses.
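Because location is optional at API creation time, a client building this request might include it conditionally; omitting it just means every later invoke or query call must supply a location itself. A hypothetical sketch:

```python
# Hypothetical builder for the POST .../apis request body. The interface ID
# and contract address values come from earlier steps in this guide.
def api_registration(name: str, interface_id: str, address=None) -> dict:
    body = {"name": name, "interface": {"id": interface_id}}
    if address is not None:
        # Pin this API to one deployed contract instance
        body["location"] = {"address": address}
    return body

with_loc = api_registration("simple-storage",
                            "f9e34787-e634-46cd-af47-b52c537404ff",
                            "KT1ED4gj2xZnp8318yxa5NpvyvW15pqe4yFg")
without_loc = api_registration("simple-storage",
                               "f9e34787-e634-46cd-af47-b52c537404ff")
```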
POST http://localhost:5000/api/v1/namespaces/default/apis
{\n \"name\": \"simple-storage\",\n \"interface\": {\n \"id\": \"f9e34787-e634-46cd-af47-b52c537404ff\"\n },\n \"location\": {\n \"address\": \"KT1ED4gj2xZnp8318yxa5NpvyvW15pqe4yFg\"\n }\n}\n"},{"location":"tutorials/custom_contracts/tezos/#response_1","title":"Response","text":"{\n \"id\": \"af09de97-741d-4f61-8d30-4db5e7460f76\",\n \"namespace\": \"default\",\n \"interface\": {\n \"id\": \"f9e34787-e634-46cd-af47-b52c537404ff\"\n },\n \"location\": {\n \"address\": \"KT1ED4gj2xZnp8318yxa5NpvyvW15pqe4yFg\"\n },\n \"name\": \"simple-storage\",\n \"urls\": {\n \"openapi\": \"http://127.0.0.1:5000/api/v1/namespaces/default/apis/simple-storage/api/swagger.json\",\n \"ui\": \"http://127.0.0.1:5000/api/v1/namespaces/default/apis/simple-storage/api\"\n }\n}\n"},{"location":"tutorials/custom_contracts/tezos/#view-openapi-spec-for-the-contract","title":"View OpenAPI spec for the contract","text":"You'll notice in the response body that there are a couple of URLs near the bottom. If you navigate to the one labeled ui in your browser, you should see the Swagger UI for your smart contract.
Now that we've got everything set up, it's time to use our smart contract! We're going to make a POST request to the invoke/set endpoint to set the integer value on-chain. Let's set it to the value of 3 right now.
POST http://localhost:5000/api/v1/namespaces/default/apis/simple-storage/invoke/set
{\n \"input\": {\n \"newValue\": 3\n }\n}\n"},{"location":"tutorials/custom_contracts/tezos/#response_2","title":"Response","text":"{\n \"id\": \"87c7ee1b-33d1-46e2-b3f5-8566c14367cf\",\n \"type\": \"blockchain_invoke\",\n \"status\": \"Pending\",\n \"...\"\n}\n You'll notice that we got an ID back with status Pending, and that's expected due to the asynchronous programming model of working with smart contracts in FireFly. To see what the value is now, we can query the smart contract.
To make a read-only request to the blockchain to check the current value of the stored integer, we can make a POST to the query/get endpoint.
POST http://localhost:5000/api/v1/namespaces/default/apis/simple-storage/query/get
{}\n"},{"location":"tutorials/custom_contracts/tezos/#response_3","title":"Response","text":"{\n \"3\"\n}\n NOTE: Some contracts may have queries that require input parameters. That's why the query endpoint is a POST, rather than a GET so that parameters can be passed as JSON in the request body. This particular function does not have any parameters, so we just pass an empty JSON object.
Tokens are a critical building block in many blockchain-backed applications. Fungible tokens can represent a store of value or a means of rewarding participation in a multi-party system, while non-fungible tokens provide a clear way to identify and track unique entities across the network. FireFly provides flexible mechanisms to operate on any type of token and to tie those operations to on- and off-chain data.
Token pools are a FireFly construct for describing a set of tokens. The exact definition of a token pool is dependent on the token connector implementation. Some examples of how pools might map to various well-defined Ethereum standards:
These are provided as examples only - a custom token connector could be backed by any token technology (Ethereum or otherwise) as long as it can support the basic operations described here (create pool, mint, burn, transfer). Other FireFly repos include a sample implementation of a token connector for ERC-20 and ERC-721 as well as ERC-1155.
"},{"location":"tutorials/tokens/erc1155/","title":"Use ERC-1155 tokens","text":""},{"location":"tutorials/tokens/erc1155/#previous-steps-install-the-firefly-cli","title":"Previous steps: Install the FireFly CLI","text":"If you haven't set up the FireFly CLI already, please go back to the Getting Started guide and read the section on how to Install the FireFly CLI.
\u2190 \u2460 Install the FireFly CLI
"},{"location":"tutorials/tokens/erc1155/#create-a-stack-with-an-erc-1155-connector","title":"Create a stack with an ERC-1155 connector","text":"The default token connector that the FireFly CLI sets up is for ERC-20 and ERC-721. If you would like to work with ERC-1155 tokens, you need to create a stack that is configured to use that token connector. To do that, run:
ff init ethereum -t erc-1155\n Then run:
ff start <your_stack_name>\n"},{"location":"tutorials/tokens/erc1155/#about-the-sample-token-contract","title":"About the sample token contract","text":"When the FireFly CLI set up your FireFly stack, it also deployed a sample ERC-1155 contract that conforms to the expectations of the token connector. When you create a token pool through FireFly's token APIs, that contract will be used by default.
\u26a0\ufe0f WARNING: The default token contract that was deployed by the FireFly CLI is only provided for the purpose of learning about FireFly. It is not a production grade contract. If you intend to deploy a production application using tokens on FireFly, you should research token contract best practices. For details, please see the source code for the contract that was deployed."},{"location":"tutorials/tokens/erc1155/#use-the-sandbox-optional","title":"Use the Sandbox (optional)","text":"At this point you could open the Sandbox at http://127.0.0.1:5109/home?action=tokens.pools and perform the functions outlined in the rest of this guide. Or you can keep reading to learn how to build HTTP requests to work with tokens in FireFly.
"},{"location":"tutorials/tokens/erc1155/#create-a-pool-using-default-token-contract","title":"Create a pool (using default token contract)","text":"After your stack is up and running, the first thing you need to do is create a token pool. Every application will need at least one token pool. At a minimum, you must always specify a name and type (fungible or nonfungible) for the pool.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/pools?publish=true
NOTE: Without passing the query parameter publish=true when the token pool is created, it will initially be unpublished and not broadcasted to other members of the network (if configured in multi-party). To publish the token pool, a subsequent API call would need to be made to /tokens/pools/{nameOrId}/publish
{\n \"name\": \"testpool\",\n \"type\": \"fungible\"\n}\n Other parameters:
connector: if you have configured multiple token connectors
config: an object of additional parameters, if supported by your token connector
key: a signing identity understood by the connector (e.g. an Ethereum address), if you'd like to use a non-default signing identity
If you wish to use a contract that is already on the chain, it is recommended that you first upload the ABI for your specific contract by creating a FireFly contract interface. This step is optional if you're certain that your ERC-1155 ABI conforms to the default expectations of the token connector, but is generally recommended.
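The required and optional pool-creation fields can be sketched as a small helper — hypothetical code, with field names taken from the request examples above:

```python
import json

def build_pool_request(name, pool_type, connector=None, config=None, key=None):
    """Assemble the JSON body for POST /tokens/pools (hypothetical helper).

    Only name and type are required; connector, config, and key are the
    optional parameters described above.
    """
    if pool_type not in ("fungible", "nonfungible"):
        raise ValueError("type must be 'fungible' or 'nonfungible'")
    body = {"name": name, "type": pool_type}
    if connector is not None:
        body["connector"] = connector
    if config is not None:
        body["config"] = config
    if key is not None:
        body["key"] = key
    return body

# Minimal pool, matching the first request shown above
print(json.dumps(build_pool_request("testpool", "fungible")))

# Pool backed by an already-deployed contract, via the optional config object
existing = build_pool_request(
    "testpool", "fungible",
    config={"address": "0xb1C845D32966c79E23f733742Ed7fCe4B41901FC"},
)
```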
See the README of the token connector for details on what contract variants can currently be understood.
You can pass a config object with an address when you make the request to create the token pool, and if you created a contract interface, you can include the interface ID as well.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/pools?publish=true
NOTE: Without passing the query parameter publish=true when the token pool is created, it will initially be unpublished and not broadcasted to other members of the network (if configured in multi-party). To publish the token pool, a subsequent API call would need to be made to /tokens/pools/{nameOrId}/publish
{\n \"name\": \"testpool\",\n \"type\": \"fungible\",\n \"interface\": {\n \"id\": \"b9e5e1ce-97bb-4a35-a25c-52c7c3f523d8\"\n },\n \"config\": {\n \"address\": \"0xb1C845D32966c79E23f733742Ed7fCe4B41901FC\"\n }\n}\n"},{"location":"tutorials/tokens/erc1155/#mint-tokens","title":"Mint tokens","text":"Once you have a token pool, you can mint tokens within it. With the default sample contract, only the creator of a pool is allowed to mint - but each contract may define its own permission model.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/mint
{\n \"amount\": 10\n}\n Other parameters:
pool: the pool name, if you've created more than one pool
key: a signing identity understood by the connector (e.g. an Ethereum address), if you'd like to use a non-default signing identity
to: if you'd like to send the minted tokens to a specific identity (default is the same as key)
You may transfer tokens within a pool by specifying an amount and a destination understood by the connector (e.g. an Ethereum address). With the default sample contract, only the owner of a token or another approved account may transfer it away - but each contract may define its own permission model.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/transfers
{\n \"amount\": 1,\n \"to\": \"0x07eab7731db665caf02bc92c286f51dea81f923f\"\n}\n NOTE: When transferring a non-fungible token, the amount must always be 1. The tokenIndex field is also required when transferring a non-fungible token.
Other parameters:
pool: the pool name, if you've created more than one pool
key: a signing identity understood by the connector (e.g. an Ethereum address), if you'd like to use a non-default signing identity
from: if you'd like to send tokens from a specific identity (default is the same as key)
All transfers (as well as mint/burn operations) support an optional message parameter that contains a broadcast or private message to be sent along with the transfer. This message follows the same convention as other FireFly messages, may consist of text or blob data, and can provide context, metadata, or other supporting information about the transfer. The message will be batched, hashed, and pinned to the primary blockchain.
The message ID and hash will also be sent to the token connector as part of the transfer operation, to be written to the token blockchain when the transaction is submitted. All recipients of the message will then be able to correlate the message with the token transfer.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/transfers
{\n \"amount\": 1,\n \"to\": \"0x07eab7731db665caf02bc92c286f51dea81f923f\",\n \"message\": {\n \"data\": [\n {\n \"value\": \"payment for goods\"\n }\n ]\n }\n}\n"},{"location":"tutorials/tokens/erc1155/#private-message","title":"Private message","text":"{\n \"amount\": 1,\n \"to\": \"0x07eab7731db665caf02bc92c286f51dea81f923f\",\n \"message\": {\n \"header\": {\n \"type\": \"transfer_private\"\n },\n \"group\": {\n \"members\": [\n {\n \"identity\": \"org_1\"\n }\n ]\n },\n \"data\": [\n {\n \"value\": \"payment for goods\"\n }\n ]\n }\n}\n Note that all parties in the network will be able to see the transfer (including the message ID and hash), but only the recipients of the message will be able to view the actual message data.
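The broadcast and private variants above differ only in the message header and group; a sketch of assembling either body — the helper is hypothetical, but the field names follow the examples shown:

```python
def transfer_with_message(amount, to, value, members=None):
    """Build a token transfer body carrying a broadcast or private message.

    If members is given, the message becomes private to that group
    (type "transfer_private"); otherwise it is a broadcast.
    """
    message = {"data": [{"value": value}]}
    if members:
        message["header"] = {"type": "transfer_private"}
        message["group"] = {"members": [{"identity": m} for m in members]}
    return {"amount": amount, "to": to, "message": message}

body = transfer_with_message(
    1, "0x07eab7731db665caf02bc92c286f51dea81f923f",
    "payment for goods", members=["org_1"],
)
```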
"},{"location":"tutorials/tokens/erc1155/#burn-tokens","title":"Burn tokens","text":"You may burn tokens by simply specifying an amount. With the default sample contract, only the owner of a token or another approved account may burn it - but each connector may define its own permission model.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/burn
{\n \"amount\": 1\n}\n NOTE: When burning a non-fungible token, the amount must always be 1. The tokenIndex field is also required when burning a non-fungible token.
Other parameters:
pool: the pool name, if you've created more than one pool
key: a signing identity understood by the connector (e.g. an Ethereum address), if you'd like to use a non-default signing identity
from: if you'd like to burn tokens from a specific identity (default is the same as key)
You can also approve other wallets to transfer tokens on your behalf with the /approvals API. The important fields in a token approval API request are as follows:
approved: sets whether another account is allowed to transfer tokens out of this wallet. If not specified, it defaults to true. Setting it to false can revoke an existing approval.
operator: the other account that is allowed to transfer tokens out of the wallet specified in the key field.
key: the wallet address for the approval. If not set, it defaults to the address of the FireFly node submitting the transaction.
Here is an example request that would let the signing account 0x634ee8c7d0894d086c7af1fc8514736aed251528 transfer any amount of tokens from my wallet:
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/approvals
{\n \"operator\": \"0x634ee8c7d0894d086c7af1fc8514736aed251528\"\n}\n"},{"location":"tutorials/tokens/erc1155/#response","title":"Response","text":"{\n \"localId\": \"46fef50a-cf93-4f92-acf8-fae161b37362\",\n \"pool\": \"e1477ed5-7282-48e5-ad9d-1612296bb29d\",\n \"connector\": \"erc1155\",\n \"key\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"operator\": \"0x634ee8c7d0894d086c7af1fc8514736aed251528\",\n \"approved\": true,\n \"tx\": {\n \"type\": \"token_approval\",\n \"id\": \"00faa011-f42c-403d-a047-2df7318967cd\"\n }\n}\n"},{"location":"tutorials/tokens/erc20/","title":"Use ERC-20 tokens","text":""},{"location":"tutorials/tokens/erc20/#previous-steps-start-your-environment","title":"Previous steps: Start your environment","text":"If you haven't started a FireFly stack already, please go to the Getting Started guide on how to Start your environment. This will set up a token connector that works with both ERC-20 and ERC-721 by default.
\u2190 \u2461 Start your environment
"},{"location":"tutorials/tokens/erc20/#about-the-sample-token-contracts","title":"About the sample token contracts","text":"If you are using the default ERC-20 / ERC-721 token connector, when the FireFly CLI set up your FireFly stack, it also deployed a token factory contract. When you create a token pool through FireFly's token APIs, the token factory contract will automatically deploy an ERC-20 or ERC-721 contract, based on the pool type in the API request.
At this point you could open the Sandbox at http://127.0.0.1:5109/home?action=tokens.pools and perform the functions outlined in the rest of this guide. Or you can keep reading to learn how to build HTTP requests to work with tokens in FireFly.
"},{"location":"tutorials/tokens/erc20/#create-a-pool-using-default-token-factory","title":"Create a pool (using default token factory)","text":"After your stack is up and running, the first thing you need to do is create a token pool. Every application will need at least one token pool. At a minimum, you must always specify a name and type for the pool.
If you're using the default ERC-20 / ERC-721 token connector and its sample token factory, it will automatically deploy a new ERC-20 contract instance.
"},{"location":"tutorials/tokens/erc20/#request","title":"Request","text":"POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/pools?publish=true
NOTE: Without passing the query parameter publish=true when the token pool is created, it will initially be unpublished and not broadcasted to other members of the network (if configured in multi-party). To publish the token pool, a subsequent API call would need to be made to /tokens/pools/{nameOrId}/publish
{\n \"name\": \"testpool\",\n \"type\": \"fungible\"\n}\n"},{"location":"tutorials/tokens/erc20/#response","title":"Response","text":"{\n \"id\": \"e1477ed5-7282-48e5-ad9d-1612296bb29d\",\n \"type\": \"fungible\",\n \"namespace\": \"default\",\n \"name\": \"testpool\",\n \"key\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"connector\": \"erc20_erc721\",\n \"tx\": {\n \"type\": \"token_pool\",\n \"id\": \"e901921e-ffc4-4776-b20a-9e9face70a47\"\n },\n \"published\": true\n}\n Other parameters:
connector: if you have configured multiple token connectors
config: an object of additional parameters, if supported by your token connector
key: a signing identity understood by the connector (e.g. an Ethereum address), if you'd like to use a non-default signing identity
To look up the address of the new contract, you can query the token pool by its ID on the API. Creating the token pool will also emit an event containing the address. To query the token pool, make a GET request to the pool's ID:
GET http://127.0.0.1:5000/api/v1/namespaces/default/tokens/pools/5811e8d5-52d0-44b1-8b75-73f5ff88f598
{\n \"id\": \"e1477ed5-7282-48e5-ad9d-1612296bb29d\",\n \"type\": \"fungible\",\n \"namespace\": \"default\",\n \"name\": \"testpool\",\n \"standard\": \"ERC20\",\n \"locator\": \"address=0xc4d02efcfab06f18ec0a68e00b98ffecf6bf7e3c&schema=ERC20WithData&type=fungible\",\n \"decimals\": 18,\n \"connector\": \"erc20_erc721\",\n \"message\": \"7e2f6004-31fd-4ba8-9845-15c5fe5fbcd7\",\n \"state\": \"confirmed\",\n \"created\": \"2022-04-28T14:03:16.732222381Z\",\n \"info\": {\n \"address\": \"0xc4d02efcfab06f18ec0a68e00b98ffecf6bf7e3c\",\n \"name\": \"testpool\",\n \"schema\": \"ERC20WithData\"\n },\n \"tx\": {\n \"type\": \"token_pool\",\n \"id\": \"e901921e-ffc4-4776-b20a-9e9face70a47\"\n }\n}\n"},{"location":"tutorials/tokens/erc20/#create-a-pool-from-a-deployed-token-contract","title":"Create a pool (from a deployed token contract)","text":"If you wish to index and use a contract that is already on the chain, it is recommended that you first upload the ABI for your specific contract by creating a FireFly contract interface. This step is optional if you're certain that your ERC-20 ABI conforms to the default expectations of the token connector, but is generally recommended.
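The pool lookup response above carries the contract address both directly in info.address and encoded in the connector's locator string, which parses like a URL query string. A sketch of reading both, using a trimmed-down copy of that response:

```python
from urllib.parse import parse_qs

# Trimmed-down copy of the pool lookup response shown above
pool = {
    "standard": "ERC20",
    "locator": "address=0xc4d02efcfab06f18ec0a68e00b98ffecf6bf7e3c&schema=ERC20WithData&type=fungible",
    "decimals": 18,
    "info": {"address": "0xc4d02efcfab06f18ec0a68e00b98ffecf6bf7e3c"},
}

# The contract address is available directly...
address = pool["info"]["address"]

# ...and is also encoded in the connector's locator string
locator = {k: v[0] for k, v in parse_qs(pool["locator"]).items()}
assert locator["address"] == address
```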
See the README of the token connector for details on what contract variants can currently be understood.
You can pass a config object with an address and blockNumber when you make the request to create the token pool, and if you created a contract interface, you can include the interface ID as well.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/pools?publish=true
NOTE: Without passing the query parameter publish=true when the token pool is created, it will initially be unpublished and not broadcasted to other members of the network (if configured in multi-party). To publish the token pool, a subsequent API call would need to be made to /tokens/pools/{nameOrId}/publish
{\n \"name\": \"testpool\",\n \"type\": \"fungible\",\n \"interface\": {\n \"id\": \"b9e5e1ce-97bb-4a35-a25c-52c7c3f523d8\"\n },\n \"config\": {\n \"address\": \"0xb1C845D32966c79E23f733742Ed7fCe4B41901FC\",\n \"blockNumber\": \"0\"\n }\n}\n"},{"location":"tutorials/tokens/erc20/#mint-tokens","title":"Mint tokens","text":"Once you have a token pool, you can mint tokens within it. When using the sample contract deployed by the CLI, only the creator of a pool is allowed to mint, but a different contract may define its own permission model.
NOTE: The default sample contract uses 18 decimal places. This means that if you want to create 100 tokens, the number submitted to the API / blockchain should actually be 100\u00d710^18 = 100000000000000000000. This allows users to work with \"fractional\" tokens even though Ethereum virtual machines only support integer arithmetic.
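That scaling can be sketched as a small helper — hypothetical code; FireFly itself just takes the resulting integer string:

```python
from decimal import Decimal

def to_base_units(amount, decimals=18):
    """Convert a human-readable token amount to the integer string the
    API expects, for a contract using the given number of decimals."""
    scaled = Decimal(str(amount)) * (10 ** decimals)
    if scaled != scaled.to_integral_value():
        raise ValueError("amount has more precision than the contract supports")
    return str(int(scaled))

print(to_base_units(100))    # 100000000000000000000
print(to_base_units("0.5"))  # 500000000000000000
```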
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/mint
{\n \"amount\": \"100000000000000000000\"\n}\n"},{"location":"tutorials/tokens/erc20/#response_2","title":"Response","text":"{\n \"type\": \"mint\",\n \"localId\": \"835fe2a1-594b-4336-bc1d-b2f59d51064b\",\n \"pool\": \"e1477ed5-7282-48e5-ad9d-1612296bb29d\",\n \"connector\": \"erc20_erc721\",\n \"key\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"from\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"to\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"amount\": \"100000000000000000000\",\n \"tx\": {\n \"type\": \"token_transfer\",\n \"id\": \"3fc97e24-fde1-4e80-bd82-660e479c0c43\"\n }\n}\n Other parameters:
pool: the pool name, if you've created more than one pool
key: a signing identity understood by the connector (e.g. an Ethereum address), if you'd like to use a non-default signing identity
to: if you'd like to send the minted tokens to a specific identity (default is the same as key)
You may transfer tokens within a pool by specifying an amount and a destination understood by the connector (e.g. an Ethereum address). With the default sample contract, only the owner of the tokens or another approved account may transfer their tokens, but a different contract may define its own permission model.
"},{"location":"tutorials/tokens/erc20/#request_4","title":"Request","text":"POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/transfers
{\n \"amount\": \"10000000000000000000\",\n \"to\": \"0xa4222a4ae19448d43a338e6586edd5fb2ac398e1\"\n}\n"},{"location":"tutorials/tokens/erc20/#response_3","title":"Response","text":"{\n \"type\": \"transfer\",\n \"localId\": \"61f0a71f-712b-4778-8b37-784fbee52657\",\n \"pool\": \"e1477ed5-7282-48e5-ad9d-1612296bb29d\",\n \"connector\": \"erc20_erc721\",\n \"key\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"from\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"to\": \"0xa4222a4ae19448d43a338e6586edd5fb2ac398e1\",\n \"amount\": \"10000000000000000000\",\n \"tx\": {\n \"type\": \"token_transfer\",\n \"id\": \"c0c316a3-23a9-42f3-89b3-1cfdba6c948d\"\n }\n}\n Other parameters:
pool: the pool name, if you've created more than one pool
key: a signing identity understood by the connector (e.g. an Ethereum address), if you'd like to use a non-default signing identity
from: if you'd like to send tokens from a specific identity (default is the same as key)
All transfers (as well as mint/burn operations) support an optional message parameter that contains a broadcast or private message to be sent along with the transfer. This message follows the same convention as other FireFly messages, may consist of text or blob data, and can provide context, metadata, or other supporting information about the transfer. The message will be batched, hashed, and pinned to the primary blockchain.
The message ID and hash will also be sent to the token connector as part of the transfer operation, to be written to the token blockchain when the transaction is submitted. All recipients of the message will then be able to correlate the message with the token transfer.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/transfers
{\n \"amount\": 1,\n \"to\": \"0x07eab7731db665caf02bc92c286f51dea81f923f\",\n \"message\": {\n \"data\": [\n {\n \"value\": \"payment for goods\"\n }\n ]\n }\n}\n"},{"location":"tutorials/tokens/erc20/#private-message","title":"Private message","text":"{\n \"amount\": 1,\n \"to\": \"0x07eab7731db665caf02bc92c286f51dea81f923f\",\n \"message\": {\n \"header\": {\n \"type\": \"transfer_private\"\n },\n \"group\": {\n \"members\": [\n {\n \"identity\": \"org_1\"\n }\n ]\n },\n \"data\": [\n {\n \"value\": \"payment for goods\"\n }\n ]\n }\n}\n Note that all parties in the network will be able to see the transfer (including the message ID and hash), but only the recipients of the message will be able to view the actual message data.
"},{"location":"tutorials/tokens/erc20/#burn-tokens","title":"Burn tokens","text":"You may burn tokens by simply specifying an amount. With the default sample contract, only the owner of a token or another approved account may burn it, but a different contract may define its own permission model.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/burn
{\n \"amount\": 1\n}\n Other parameters:
pool: the pool name, if you've created more than one pool
key: a signing identity understood by the connector (e.g. an Ethereum address), if you'd like to use a non-default signing identity
from: if you'd like to burn tokens from a specific identity (default is the same as key)
You can also approve other wallets to transfer tokens on your behalf with the /approvals API. The important fields in a token approval API request are as follows:
approved: sets whether another account is allowed to transfer tokens out of this wallet. If not specified, it defaults to true. Setting it to false can revoke an existing approval.
operator: the other account that is allowed to transfer tokens out of the wallet specified in the key field.
config.allowance: the number of tokens the other account is allowed to transfer. If 0 or not set, the approval is valid for any number.
key: the wallet address for the approval. If not set, it defaults to the address of the FireFly node submitting the transaction.
Here is an example request that would let the signing account 0x634ee8c7d0894d086c7af1fc8514736aed251528 transfer up to 10\u00d710^18 (10000000000000000000) tokens from my wallet:
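The allowance in the request below is the base-unit form of 10 whole tokens; a sketch of assembling an approval body with that scaling — a hypothetical helper, assuming an 18-decimal contract like the default sample:

```python
from decimal import Decimal

def build_approval(operator, allowance=None, decimals=18, approved=True):
    """Assemble a token approval body (hypothetical helper).

    allowance is given in whole tokens and scaled to base units here;
    omit it to approve transfers of any amount.
    """
    body = {"operator": operator, "approved": approved}
    if allowance is not None:
        base_units = int(Decimal(str(allowance)) * (10 ** decimals))
        body["config"] = {"allowance": str(base_units)}
    return body

approval = build_approval("0x634ee8c7d0894d086c7af1fc8514736aed251528", allowance=10)
```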
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/approvals
{\n \"operator\": \"0x634ee8c7d0894d086c7af1fc8514736aed251528\",\n \"config\": {\n \"allowance\": \"10000000000000000000\"\n }\n}\n"},{"location":"tutorials/tokens/erc20/#response_4","title":"Response","text":"{\n \"localId\": \"46fef50a-cf93-4f92-acf8-fae161b37362\",\n \"pool\": \"e1477ed5-7282-48e5-ad9d-1612296bb29d\",\n \"connector\": \"erc20_erc721\",\n \"key\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"operator\": \"0x634ee8c7d0894d086c7af1fc8514736aed251528\",\n \"approved\": true,\n \"tx\": {\n \"type\": \"token_approval\",\n \"id\": \"00faa011-f42c-403d-a047-2df7318967cd\"\n },\n \"config\": {\n \"allowance\": \"10000000000000000000\"\n }\n}\n"},{"location":"tutorials/tokens/erc20/#use-metamask","title":"Use Metamask","text":"Now that you have an ERC-20 contract up and running, you may be wondering how to use Metamask (or some other wallet) with this contract. This section will walk you through how to connect Metamask to the blockchain and token contract that FireFly is using.
"},{"location":"tutorials/tokens/erc20/#configure-a-new-network","title":"Configure a new network","text":"The first thing we need to do is tell Metamask how to connect to our local blockchain node. To do that:
In the drop down menu, click Settings
On the left hand side of the page, click Networks
Click the Add a network button
Fill in the network details:
Network name: FireFly (could be any name)
New RPC URL: http://127.0.0.1:5100
Chain ID: 2021
Metamask won't know about our custom ERC-20 contract until we give it the Ethereum address for the contract, so that's what we'll do next.
Click on Import tokens
Enter the Ethereum address of the contract
NOTE: You can find the address of your contract from the response to the request to create the token pool above. You can also do a GET to http://127.0.0.1:5000/api/v1/namespaces/default/tokens/pools to lookup your configured token pools.
Now you can copy your account address from your Metamask wallet, and perform a transfer from FireFly's API (as described above) to your Metamask address.
After a couple seconds, you should see your tokens show up in your Metamask wallet.
You can also send tokens to a FireFly address or any other Ethereum address from your Metamask wallet.
NOTE: You can find the Ethereum addresses for organizations in your FireFly network in the Network \u2192 Organizations page in the FireFly explorer. Click on an organization and look under the Verifiers header for the organization's Ethereum address.
"},{"location":"tutorials/tokens/erc721/","title":"Use ERC-721 tokens","text":""},{"location":"tutorials/tokens/erc721/#previous-steps-start-your-environment","title":"Previous steps: Start your environment","text":"If you haven't started a FireFly stack already, please go to the Getting Started guide on how to Start your environment. This will set up a token connector that works with both ERC-20 and ERC-721 by default.
\u2190 \u2461 Start your environment
"},{"location":"tutorials/tokens/erc721/#about-the-sample-token-contracts","title":"About the sample token contracts","text":"If you are using the default ERC-20 / ERC-721 token connector, when the FireFly CLI set up your FireFly stack, it also deployed a token factory contract. When you create a token pool through FireFly's token APIs, the token factory contract will automatically deploy an ERC-20 or ERC-721 contract, based on the pool type in the API request.
At this point you could open the Sandbox at http://127.0.0.1:5109/home?action=tokens.pools and perform the functions outlined in the rest of this guide. Or you can keep reading to learn how to build HTTP requests to work with tokens in FireFly.
"},{"location":"tutorials/tokens/erc721/#create-a-pool-using-default-token-factory","title":"Create a pool (using default token factory)","text":"After your stack is up and running, the first thing you need to do is create a token pool. Every application will need at least one token pool. At a minimum, you must always specify a name and type for the pool.
If you're using the default ERC-20 / ERC-721 token connector and its sample token factory, it will automatically deploy a new ERC-721 contract instance.
"},{"location":"tutorials/tokens/erc721/#request","title":"Request","text":"POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/pools?publish=true
NOTE: Without passing the query parameter publish=true when the token pool is created, it will initially be unpublished and not broadcasted to other members of the network (if configured in multi-party). To publish the token pool, a subsequent API call would need to be made to /tokens/pools/{nameOrId}/publish
{\n \"type\": \"nonfungible\",\n \"name\": \"nfts\"\n}\n"},{"location":"tutorials/tokens/erc721/#response","title":"Response","text":"{\n \"id\": \"a92a0a25-b886-4b43-931f-4add2840258a\",\n \"type\": \"nonfungible\",\n \"namespace\": \"default\",\n \"name\": \"nfts\",\n \"key\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"connector\": \"erc20_erc721\",\n \"tx\": {\n \"type\": \"token_pool\",\n \"id\": \"00678116-89d2-4295-990c-bd5ffa6e2434\"\n },\n \"published\": true\n}\n Other parameters:
connector: if you have configured multiple token connectors
config: an object of additional parameters, if supported by your token connector
key: a signing identity understood by the connector (e.g. an Ethereum address), if you'd like to use a non-default signing identity
To look up the address of the new contract, you can query the token pool by its ID on the API. Creating the token pool will also emit an event containing the address. To query the token pool, make a GET request to the pool's ID:
GET http://127.0.0.1:5000/api/v1/namespaces/default/tokens/pools/5811e8d5-52d0-44b1-8b75-73f5ff88f598
{\n \"id\": \"a92a0a25-b886-4b43-931f-4add2840258a\",\n \"type\": \"nonfungible\",\n \"namespace\": \"default\",\n \"name\": \"nfts\",\n \"standard\": \"ERC721\",\n \"locator\": \"address=0xc4d02efcfab06f18ec0a68e00b98ffecf6bf7e3c&schema=ERC721WithData&type=nonfungible\",\n \"connector\": \"erc20_erc721\",\n \"message\": \"53d95dda-e8ca-4546-9226-a0fdc6ec03ec\",\n \"state\": \"confirmed\",\n \"created\": \"2022-04-29T12:03:51.971349509Z\",\n \"info\": {\n \"address\": \"0xc4d02efcfab06f18ec0a68e00b98ffecf6bf7e3c\",\n \"name\": \"nfts\",\n \"schema\": \"ERC721WithData\"\n },\n \"tx\": {\n \"type\": \"token_pool\",\n \"id\": \"00678116-89d2-4295-990c-bd5ffa6e2434\"\n }\n}\n"},{"location":"tutorials/tokens/erc721/#create-a-pool-from-a-deployed-token-contract","title":"Create a pool (from a deployed token contract)","text":"If you wish to index and use a contract that is already on the chain, it is recommended that you first upload the ABI for your specific contract by creating a FireFly contract interface. This step is optional if you're certain that your ERC-721 ABI conforms to the default expectations of the token connector, but is generally recommended.
See the README of the token connector for details on what contract variants can currently be understood.
You can pass a config object with an address and blockNumber when you make the request to create the token pool, and if you created a contract interface, you can include the interface ID as well.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/pools?publish=true
NOTE: Without passing the query parameter publish=true when the token pool is created, it will initially be unpublished and not broadcasted to other members of the network (if configured in multi-party). To publish the token pool, a subsequent API call would need to be made to /tokens/pools/{nameOrId}/publish
{\n \"name\": \"testpool\",\n \"type\": \"nonfungible\",\n \"interface\": {\n \"id\": \"b9e5e1ce-97bb-4a35-a25c-52c7c3f523d8\"\n },\n \"config\": {\n \"address\": \"0xb1C845D32966c79E23f733742Ed7fCe4B41901FC\",\n \"blockNumber\": \"0\"\n }\n}\n"},{"location":"tutorials/tokens/erc721/#mint-a-token","title":"Mint a token","text":"Once you have a token pool, you can mint tokens within it. When using the sample contract deployed by the CLI, the following are true:
tokenIndex must be set to a unique value
amount must be 1
A different ERC-721 contract may define its own requirements.
"},{"location":"tutorials/tokens/erc721/#request_3","title":"Request","text":"POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/mint
{\n \"amount\": \"1\",\n \"tokenIndex\": \"1\"\n}\n"},{"location":"tutorials/tokens/erc721/#response_2","title":"Response","text":"{\n \"type\": \"mint\",\n \"localId\": \"2de2e05e-9474-4a08-a64f-2cceb076bdaa\",\n \"pool\": \"a92a0a25-b886-4b43-931f-4add2840258a\",\n \"connector\": \"erc20_erc721\",\n \"key\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"from\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"to\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"amount\": \"1\",\n \"tx\": {\n \"type\": \"token_transfer\",\n \"id\": \"0fad4581-7cb2-42c7-8f78-62d32205c2c2\"\n }\n}\n Other parameters:
pool: the pool name, if you've created more than one pool
key: a signing identity understood by the connector (e.g. an Ethereum address), if you'd like to use a non-default signing identity
to: if you'd like to send the minted tokens to a specific identity (default is the same as key)
You may transfer tokens within a pool by specifying an amount and a destination understood by the connector (e.g. an Ethereum address). With the default sample contract, only the owner of the tokens or another approved account may transfer their tokens, but a different contract may define its own permission model.
When transferring an NFT, you must also specify the tokenIndex that you wish to transfer. The tokenIndex is simply the ID of the specific NFT within the pool that you wish to transfer.
NOTE: When transferring NFTs the amount must be 1. If you wish to transfer more NFTs, simply call the endpoint multiple times, specifying the token index of each token to transfer.
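Moving several NFTs therefore means one request body per tokenIndex; a sketch of producing those bodies — the helper is hypothetical, the field names follow the request example:

```python
def nft_transfer_bodies(token_indexes, to):
    """One transfer body per NFT: the amount is always "1", and each
    call names the tokenIndex being moved (hypothetical helper)."""
    return [
        {"amount": "1", "tokenIndex": str(i), "to": to}
        for i in token_indexes
    ]

# Transferring three NFTs means three separate API calls
bodies = nft_transfer_bodies([1, 2, 3], "0xa4222a4ae19448d43a338e6586edd5fb2ac398e1")
```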
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/transfers
{\n \"amount\": \"1\",\n \"tokenIndex\": \"1\",\n \"to\": \"0xa4222a4ae19448d43a338e6586edd5fb2ac398e1\"\n}\n"},{"location":"tutorials/tokens/erc721/#response_3","title":"Response","text":"{\n \"type\": \"transfer\",\n \"localId\": \"f5fd0d13-db13-4d70-9a99-6bcd747f1e42\",\n \"pool\": \"a92a0a25-b886-4b43-931f-4add2840258a\",\n \"tokenIndex\": \"1\",\n \"connector\": \"erc20_erc721\",\n \"key\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"from\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"to\": \"0xa4222a4ae19448d43a338e6586edd5fb2ac398e1\",\n \"amount\": \"1\",\n \"tx\": {\n \"type\": \"token_transfer\",\n \"id\": \"63c1a89b-240c-41eb-84bb-323d56f4ba5a\"\n }\n}\n Other parameters:
- pool name if you've created more than one pool
- key understood by the connector (i.e. an Ethereum address) if you'd like to use a non-default signing identity
- from if you'd like to send tokens from a specific identity (default is the same as key)

All transfers (as well as mint/burn operations) support an optional message parameter that contains a broadcast or private message to be sent along with the transfer. This message follows the same convention as other FireFly messages: it may consist of text or blob data, and can provide context, metadata, or other supporting information about the transfer. The message will be batched, hashed, and pinned to the primary blockchain.
The message ID and hash will also be sent to the token connector as part of the transfer operation, to be written to the token blockchain when the transaction is submitted. All recipients of the message will then be able to correlate the message with the token transfer.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/transfers
{\n \"amount\": 1,\n \"tokenIndex\": \"1\",\n \"to\": \"0x07eab7731db665caf02bc92c286f51dea81f923f\",\n \"message\": {\n \"data\": [\n {\n \"value\": \"payment for goods\"\n }\n ]\n }\n}\n"},{"location":"tutorials/tokens/erc721/#private-message","title":"Private message","text":"{\n \"amount\": 1,\n \"tokenIndex\": \"1\",\n \"to\": \"0x07eab7731db665caf02bc92c286f51dea81f923f\",\n \"message\": {\n \"header\": {\n \"type\": \"transfer_private\"\n },\n \"group\": {\n \"members\": [\n {\n \"identity\": \"org_1\"\n }\n ]\n },\n \"data\": [\n {\n \"value\": \"payment for goods\"\n }\n ]\n }\n}\n Note that all parties in the network will be able to see the transfer (including the message ID and hash), but only the recipients of the message will be able to view the actual message data.
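Because the message ID travels with the transfer, a recipient can correlate the two records once both arrive. A small sketch of that correlation, assuming a FireFly token transfer record carries the attached message's ID in a message field (as in this tutorial's responses); the filter-style query URL is an assumption to verify against your node's API:

```python
# Hypothetical helpers for correlating a token transfer with its message.
TRANSFERS_URL = "http://127.0.0.1:5000/api/v1/namespaces/default/tokens/transfers"

def transfers_for_message(message_id: str) -> str:
    """Build a query URL that filters token transfers by attached message ID."""
    return f"{TRANSFERS_URL}?message={message_id}"

def carries_message(transfer: dict, message_id: str) -> bool:
    """True if this transfer record references the given message ID."""
    return transfer.get("message") == message_id
```

Recipients of the private message can use this to confirm which on-chain transfer the message data belongs to.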
"},{"location":"tutorials/tokens/erc721/#burn-tokens","title":"Burn tokens","text":"You may burn a token by specifying the token's tokenIndex. With the default sample contract, only the owner of a token or another approved account may burn it, but a different contract may define its own permission model.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/burn
{\n \"amount\": 1,\n \"tokenIndex\": \"1\"\n}\n Other parameters:
- pool name if you've created more than one pool
- key understood by the connector (i.e. an Ethereum address) if you'd like to use a non-default signing identity
- from if you'd like to burn tokens from a specific identity (default is the same as key)

You can also approve other wallets to transfer tokens on your behalf with the /approvals API. The important fields in a token approval API request are as follows:
- approved: Sets whether another account is allowed to transfer tokens out of this wallet. If not specified, defaults to true. Setting it to false revokes an existing approval.
- operator: The other account that is allowed to transfer tokens out of the wallet specified in the key field
- config.tokenIndex: The specific token index within the pool that the operator is allowed to transfer. If 0 or not set, the approval is valid for all tokens.
- key: The wallet address for the approval. If not set, it defaults to the address of the FireFly node submitting the transaction

Here is an example request that would let the signing account 0x634ee8c7d0894d086c7af1fc8514736aed251528 transfer tokenIndex 2 from my wallet.
POST http://127.0.0.1:5000/api/v1/namespaces/default/tokens/approvals
{\n \"operator\": \"0x634ee8c7d0894d086c7af1fc8514736aed251528\",\n \"config\": {\n \"tokenIndex\": \"2\"\n }\n}\n"},{"location":"tutorials/tokens/erc721/#response_4","title":"Response","text":"{\n \"localId\": \"46fef50a-cf93-4f92-acf8-fae161b37362\",\n \"pool\": \"e1477ed5-7282-48e5-ad9d-1612296bb29d\",\n \"connector\": \"erc20_erc721\",\n \"key\": \"0x14ddd36a0c2f747130915bf5214061b1e4bec74c\",\n \"operator\": \"0x634ee8c7d0894d086c7af1fc8514736aed251528\",\n \"approved\": true,\n \"tx\": {\n \"type\": \"token_approval\",\n \"id\": \"00faa011-f42c-403d-a047-2df7318967cd\"\n },\n \"config\": {\n \"tokenIndex\": \"2\"\n }\n}\n"},{"location":"tutorials/tokens/erc721/#use-metamask","title":"Use Metamask","text":"Now that you have an ERC-721 contract up and running, you may be wondering how to use Metamask (or some other wallet) with this contract. This section will walk you through how to connect Metamask to the blockchain and token contract that FireFly is using.
"},{"location":"tutorials/tokens/erc721/#configure-a-new-network","title":"Configure a new network","text":"The first thing we need to do is tell Metamask how to connect to our local blockchain node. To do that:
In the drop down menu, click Settings
On the left hand side of the page, click Networks
Click the Add a network button
Fill in the network details:
- Network name: FireFly (could be any name)
- New RPC URL: http://127.0.0.1:5100
- Chain ID: 2021

Metamask won't know about our custom ERC-721 contract until we give it the Ethereum address for the contract, so that's what we'll do next.
Click on Import tokens
Enter the Ethereum address of the contract
NOTE: You can find the address of your contract in the response to the request that created the token pool above. You can also do a GET to http://127.0.0.1:5000/api/v1/namespaces/default/tokens/pools to look up your configured token pools.
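If you are scripting this lookup, the address can be pulled out of the pool record programmatically. This sketch assumes the pool's locator field is an &-separated key=value string containing an address entry, as the erc20_erc721 connector returns; other connectors or older FireFly versions may expose the address differently, so check your node's actual GET /tokens/pools response:

```python
from typing import Optional
from urllib.parse import parse_qs

def contract_address(pool: dict) -> Optional[str]:
    """Extract the Ethereum contract address from a token pool record.

    Assumes a locator like "address=0x...&schema=ERC721WithData&type=nonfungible".
    """
    fields = parse_qs(pool.get("locator", ""))
    values = fields.get("address")
    return values[0] if values else None
```

The returned address is what you paste into Metamask's Import tokens dialog.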
Now you can copy your account address from your Metamask wallet, and perform a transfer from FireFly's API (as described above) to your Metamask address.
After a couple of seconds, you should see your token show up in your Metamask wallet.
NOTE: While the NFT token balance can be viewed in Metamask, it does not appear that Metamask supports sending these tokens to another address at this time.
"}]}