Welcome to Corda!¶
Corda is an open-source blockchain platform. If you’d like a quick introduction to blockchains and how Corda is different, then watch this short video:
Want to see Corda running? Download our demonstration application DemoBench or follow our quickstart guide.
If you want to start coding on Corda, then familiarise yourself with the key concepts, then read our Hello, World! tutorial. For the background behind Corda, read the non-technical `platform white paper`_ or for more detail, the `technical white paper`_.
If you have questions or comments, then get in touch on Slack or ask a question on Stack Overflow.
We look forward to seeing what you can do with Corda!
Note
You can read this site offline. Either `download the PDF`_ or download the Corda source code, run gradle buildDocs, and you will have a copy of this site in the docs/build/html directory.
Release notes for Corda 4¶
Welcome to the Corda 4 release notes. Please read these carefully to understand what’s new in this release and how the changes can help you. Just as prior releases have brought with them commitments to wire and API stability, Corda 4 comes with those same guarantees. States and apps valid in Corda 3 are transparently usable in Corda 4.
For app developers, we strongly recommend reading “Upgrading apps to Corda 4”. This covers the upgrade procedure, along with how you can adjust your app to opt-in to new features making your app more secure and easier to upgrade in future.
For node operators, we recommend reading “Upgrading your node to Corda 4”. The upgrade procedure is simple but it can’t hurt to read the instructions anyway.
Additionally, be aware that the data model improvements are changes to the Corda consensus rules. To use apps that benefit from them, all nodes in a compatibility zone must be upgraded and the zone must be enforcing that upgrade. This may take time in large zones like the testnet. Please take this into account for your own schedule planning.
Warning
There is a bug in Corda 3.3 that causes problems when receiving a FungibleState created by Corda 4. There will shortly be a follow-up Corda 3.4 release that corrects this error. Interop between Corda 3 and Corda 4 will require that Corda 3 users are on the latest patchlevel release.
Contents
- Release notes for Corda 4
- Changes for developers in Corda 4
- Changes for administrators in Corda 4
- Official Docker images
- Auto-acceptance for network parameters updates
- Automatic error codes
- Standardisation of command line argument handling
- Liquibase for database schema upgrades
- Ability to pre-validate configuration files
- Flow control for notaries
- Retirement of non-elliptic Diffie-Hellman for TLS
- Miscellaneous changes
Changes for developers in Corda 4¶
Reference states¶
With Corda 4 we are introducing the concept of “reference input states”. These allow smart contracts to reference data from the ledger in a transaction without simultaneously updating it. They’re useful not only for any kind of reference data such as rates, healthcare codes, geographical information etc., but for anywhere you might have used a SELECT JOIN in a SQL-based app.
A reference input state is a ContractState which can be referred to in a transaction by the contracts of input and output states but, significantly, whose contract is not executed as part of the transaction verification process and is not consumed when the transaction is committed to the ledger. Rather, it is checked for “current-ness”: the contract logic isn’t run for the transaction that merely references the state. Since they’re normal states, if they do occur in the input or output positions, they can evolve on the ledger, modeling reference data in the real world.
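To make this concrete, here is a minimal Kotlin sketch of attaching reference data to a transaction. The SanctionsList state, its empty contract and the single-result vault query are illustrative assumptions, not platform APIs:

import co.paralleluniverse.fibers.Suspendable
import net.corda.core.contracts.BelongsToContract
import net.corda.core.contracts.Contract
import net.corda.core.contracts.ContractState
import net.corda.core.contracts.referenced
import net.corda.core.flows.FlowLogic
import net.corda.core.identity.AbstractParty
import net.corda.core.node.services.queryBy
import net.corda.core.transactions.LedgerTransaction
import net.corda.core.transactions.TransactionBuilder

// A hypothetical reference-data state and its (do-nothing) contract, for illustration only.
class SanctionsListContract : Contract {
    override fun verify(tx: LedgerTransaction) {}
}

@BelongsToContract(SanctionsListContract::class)
data class SanctionsList(
    val bannedParties: List<AbstractParty>,
    override val participants: List<AbstractParty>
) : ContractState

class AttachReferenceDataFlow : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        // Fetch the latest unconsumed copy of the reference data from the vault.
        val sanctionsList = serviceHub.vaultService.queryBy<SanctionsList>().states.single()
        val builder = TransactionBuilder(serviceHub.networkMapCache.notaryIdentities.first())
        // Add it as a reference input: its contract is not executed and the state is not
        // consumed, but the notary checks that it is still current.
        builder.addReferenceState(sanctionsList.referenced())
        // ...add the real inputs, outputs and commands, then verify, sign and finalise.
    }
}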
Signature constraints¶
CorDapps built by the corda-gradle-plugins are now signed and sealed JAR files by default. This signing can be configured or disabled, with the default certificate being the Corda development certificate.
When an app is signed, that automatically activates the use of signature constraints, which are an important part of the Corda security and upgrade plan. They allow states to express what contract logic governs them socially, as in “any contract JAR signed by a threshold of these N keys is suitable”, rather than just by hash or via zone whitelist rules, as in previous releases.
We strongly recommend all apps be signed and use signature constraints going forward.
Learn more about this new feature by reading Upgrading apps to Corda 4.
State pointers¶
State pointers formalize a recommended design pattern, in which states may refer to other states on the ledger by StateRef (a pair of transaction hash and output index that is sufficient to locate any information on the global ledger). State pointers work together with the reference states feature to make it easy for data to point to the latest version of any other piece of data, with the right version being automatically incorporated into transactions for you.
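As an illustration, here is a hedged Kotlin sketch of one state pointing at the latest version of a linear state; PolicyState and ClaimState are hypothetical:

import net.corda.core.contracts.ContractState
import net.corda.core.contracts.LinearPointer
import net.corda.core.contracts.LinearState
import net.corda.core.contracts.UniqueIdentifier
import net.corda.core.identity.AbstractParty

// A hypothetical linear state tracking an insurance policy.
data class PolicyState(
    val terms: String,
    override val linearId: UniqueIdentifier,
    override val participants: List<AbstractParty>
) : LinearState

// A state that points at the latest version of a PolicyState rather than
// embedding a stale copy of it. Constructed with e.g.
// LinearPointer(policy.linearId, PolicyState::class.java).
data class ClaimState(
    val policy: LinearPointer<PolicyState>,
    override val participants: List<AbstractParty>
) : ContractState

// Inside a flow or service, the pointer can be resolved to the most recent
// unconsumed version: val latest = claim.policy.resolve(serviceHub)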
New network builder tool¶
A new graphical tool for building test Corda networks has been added. It can build Docker images for local deployment and can also remotely control Microsoft Azure, to create a test network in the cloud.
Learn more on the Corda Network Builder page.

JPA access in flows and services¶
Corda 3 provides the jdbcConnection API on FlowLogic to give access to an active connection to your underlying database. It is fully intended that apps can store their own data in their own tables in the node database, so app-specific tables can be updated atomically with the ledger data itself. But JDBC is not always convenient, so in Corda 4 we are additionally exposing the Java Persistence API (JPA) for object-relational mapping. The new ServiceHub.withEntityManager API lets you load and persist entity beans inside your flows and services.
Please do write apps that read and write directly to tables running alongside the node’s own tables. Using SQL is a convenient and robust design pattern for accessing data on or off the ledger.
Important
Please do not attempt to write to tables starting with node_ or contract_, as those are maintained by the node. Additionally, the node_ tables are private to Corda and should not be directly accessed at all. Tables starting with contract_ are generated by apps and are designed to be queried by end users, GUIs, tools etc.
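As an illustration of the new API, here is a minimal sketch of persisting an app-specific JPA entity from a flow or service; the TradeNote entity and its table name are hypothetical:

import javax.persistence.Column
import javax.persistence.Entity
import javax.persistence.Id
import javax.persistence.Table

// A hypothetical app-specific entity; note the table name avoids the
// reserved node_ and contract_ prefixes mentioned above.
@Entity
@Table(name = "app_trade_notes")
class TradeNote(
    @Id @Column(name = "trade_id") var tradeId: String = "",
    @Column(name = "note") var note: String = ""
)

// Inside a flow or service, writes happen in the same database as the ledger data:
// serviceHub.withEntityManager {
//     persist(TradeNote("TRADE-1", "Reviewed by ops"))
// }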
Security upgrades¶
Sealing. Sealed JARs are a security upgrade that ensures JARs cannot define classes in each other’s packages, thus ensuring Java’s package-private visibility feature works. The Gradle plugins now seal your JARs by default.
BelongsToContract annotation. CorDapps are currently expected to verify that the right contract is named in each state object. This manual step is easy to miss, which would make the app less secure in a network where you trade with potentially malicious counterparties. The platform now handles this for you by allowing you to annotate states with which contract governs them. If states are inner classes of a contract class, this association is automatic. See API: Contract Constraints for more information.
Two-sided FinalityFlow and SwapIdentitiesFlow. The previous FinalityFlow API was insecure because nodes would accept any finalised transaction, outside of the context of a containing flow. This would allow transactions to be sent to a node bypassing things like business network membership checks. The same applies for the SwapIdentitiesFlow in the confidential-identities module. A new API has been introduced to allow secure use of this flow.
Package namespace ownership. Corda 4 allows app developers to register their keys and Java package namespaces with the zone operator. Any JAR that defines classes in these namespaces will have to be signed by those keys. This is an opt-in feature designed to eliminate potential confusion that could arise if a malicious developer created classes in other people’s package namespaces (e.g. an attacker creating a state class called com.megacorp.exampleapp.ExampleState). Whilst Corda’s attachments feature would stop the core ledger getting confused by this, tools and formats that connect to the node may not be designed to consider attachment hashes or signing keys, and rely more heavily on type names instead. Package namespace ownership allows tool developers to assume that if a class name appears to be owned by an organisation, then the semantics of that class actually were defined by that organisation, thus eliminating edge cases that might otherwise cause confusion.
Network parameters in transactions¶
Transactions created under a Corda 4+ node will have the currently valid signed NetworkParameters file attached to each transaction. This will allow future introspection of states to ascertain what the accepted global state of the network was at the time they were notarised. Additionally, new signatures must be working with the current globally accepted parameters. The notary signing a transaction will check that it does indeed reference the current in-force network parameters, meaning that old (and superseded) network parameters cannot be used to create new transactions.
RPC upgrades¶
AMQP/1.0 is now the default serialization framework across all of Corda (checkpointing aside), with the RPC framework switched over from the older Kryo implementation. This means the Corda open source and Enterprise editions are now RPC wire compatible and either client library can be used. We previously started using AMQP/1.0 for the peer-to-peer protocol in Corda 3.
Class synthesis. The RPC framework supports the “class carpenter” feature. Clients can now freely download and deserialise objects, such as contract states, for which the defining class files are absent from their classpath. Definitions for these classes will be synthesised on the fly from the binary schemas embedded in the messages. The resulting dynamically created objects can then be fed into any framework that uses reflection, such as XML formatters, JSON libraries, GUI construction toolkits, scripting engines and so on. This approach is how the Blob Inspector tool works - it simply deserialises a message and then feeds the resulting synthetic class graph into a JSON or YAML serialisation framework.
Class synthesis will use interfaces that are implemented by the original objects if they are found on the classpath. This is designed to enable generic programming. For example, if your industry has standardised a thin Java API with interfaces that expose JavaBean style properties (get/is methods), then you can have that JAR on the classpath of your tool and cast the deserialised objects to those interfaces. In this way you can work with objects from apps you aren’t aware of.
SSL. The Corda RPC infrastructure can now be configured to use SSL for additional security. The operator of a node wishing to enable this must of course generate and distribute a certificate in order for client applications to connect successfully. This is documented in Using the client RPC API.
Preview of the deterministic DJVM¶
It is important that all nodes that process a transaction always agree on whether it is valid or not. Because transaction types are defined using JVM byte code, this means that the execution of that byte code must be fully deterministic. Out of the box a standard JVM is not fully deterministic, thus we must make some modifications in order to satisfy our requirements.
This version of Corda introduces a standalone Deterministic JVM. It isn’t yet integrated with the rest of the platform. It will eventually become a part of the node and enforce deterministic and secure execution of smart contract code, which is mobile and may propagate around the network without human intervention.
Currently, it is released as an evaluation version. We want to give developers the ability to start trying it out and get used to developing deterministic code under the set of constraints that we envision will be placed on contract code in the future. There are some instructions on how to get started with the DJVM command-line tool, which allows you to run code in a deterministic sandbox and inspect the byte code transformations that the DJVM applies to your code. Read more in “Deterministic JVM”.
Configurable flow responders¶
In Corda 4 it is possible for flows in one app to subclass and take over flows from another. This allows you to create generic, shared flow logic that individual users can customise at pre-agreed points (protected methods). For example, a site-specific app could be developed that causes transaction details to be converted to a PDF and sent to a particular printer. This would be an inappropriate feature to put into shared business logic, but it makes perfect sense to put into a user-specific app they developed themselves.
If your flows could benefit from being extended in this way, read “Configuring Responder Flows” to learn more.
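For example, here is a hedged Kotlin sketch of the pattern; IssueFlow and the onTransactionRecorded hook are hypothetical names. A shared base responder exposes a pre-agreed extension point as a protected open method, and a site-specific app subclasses it:

import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.FlowSession
import net.corda.core.flows.InitiatedBy
import net.corda.core.flows.InitiatingFlow
import net.corda.core.flows.ReceiveFinalityFlow
import net.corda.core.transactions.SignedTransaction

// Stub initiator, standing in for the shared app's real flow.
@InitiatingFlow
class IssueFlow : FlowLogic<Unit>() {
    @Suspendable override fun call() { /* builds and finalises a transaction */ }
}

// Shared base responder exposing a pre-agreed extension point.
@InitiatedBy(IssueFlow::class)
open class IssueResponder(protected val otherSide: FlowSession) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        val stx = subFlow(ReceiveFinalityFlow(otherSide))
        onTransactionRecorded(stx)
    }

    // Site-specific apps override this hook; the default does nothing.
    protected open fun onTransactionRecorded(stx: SignedTransaction) {}
}

// Site-specific subclass, e.g. rendering transaction details to a local printer.
class PrintingIssueResponder(otherSide: FlowSession) : IssueResponder(otherSide) {
    override fun onTransactionRecorded(stx: SignedTransaction) {
        // render stx to PDF and send it to the office printer here
    }
}

Which responder a node actually runs for a given initiator is selected by the node operator; see “Configuring Responder Flows” for the configuration details.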
Target/minimum versions¶
Applications can now specify a target version in their JAR manifest. The target version declares which version of the platform the app was tested against. By incrementing the target version, app developers can opt in to desirable changes that might otherwise not be entirely backwards compatible. For example in a future release when the deterministic JVM is integrated and enabled, apps will need to opt in to determinism by setting the target version to a high enough value.
Target versioning has a proven track record in both iOS and Android of enabling platforms to preserve strong backwards compatibility, whilst also moving forward with new features and bug fixes. We recommend that maintained applications always try and target the latest version of the platform. Setting a target version does not imply your app requires a node of that version, merely that it’s been tested against that version and can handle any opt-in changes.
Applications may also specify a minimum platform version. If you try to install an app in a node that is too old to satisfy this requirement, the app won’t be loaded. App developers can set their min platform version requirement if they start using new features and APIs.
Dependency upgrades¶
We’ve raised the minimum JDK to 8u171, needed to get fixes for certain ZIP compression bugs.
We’ve upgraded to Kotlin 1.2.71 so your apps can now benefit from the new features in this language release.
We’ve upgraded to Gradle 4.10.1.
Changes for administrators in Corda 4¶
Official Docker images¶
Corda 4 adds an Official Corda Docker Image for starting the node. It’s based on Ubuntu and uses the Azul Zulu spin of Java 8. Other tools will have Docker images in future as well.
Auto-acceptance for network parameters updates¶
Changes to the parameters of a compatibility zone require all nodes to opt in before a flag day.
Some changes are trivial and very unlikely to trigger any disagreement. We have added auto-acceptance for a subset of network parameters, negating the need for a node operator to manually run an accept command on every parameter update. This behaviour can be turned off via the node configuration. See Network Map.
Automatic error codes¶
Errors generated in Corda are now hashed to produce a unique error code that can be used to perform a lookup into a knowledge base. The lookup URL will be printed to the logs when an error occurs. Here’s an example:
[ERROR] 2018-12-19T17:18:39,199Z [main] internal.NodeStartupLogging.invoke - Exception during node startup: The name 'O=Wawrzek Test C4, L=London, C=GB' for identity doesn't match what's in the key store: O=Wawrzek Test C4, L=Ely, C=GB [errorCode=wuxa6f, moreInformationAt=https://errors.corda.net/OS/4.0/wuxa6f]
The hope is that common error conditions can quickly be resolved and opaque errors explained in a more user-friendly format, to facilitate faster debugging and troubleshooting.
At the moment, Stack Overflow is that knowledge base, with the error codes being converted to a URL that redirects either directly to the answer or to an appropriate search on Stack Overflow.
Standardisation of command line argument handling¶
In Corda 4 we have ported the node and all our tools to use a new command line handling framework. Advantages for you:
- Improved, coloured help output.
- Common options have been standardised to use the same name and behaviour everywhere.
- All programs can now generate bash/zsh auto completion files.
You can learn more by reading our CLI user experience guidelines document.
Liquibase for database schema upgrades¶
We have open sourced the Liquibase schema upgrade feature from Corda Enterprise. The node now uses Liquibase to bootstrap and update itself automatically. This is a transparent change, with pre-Corda 4 nodes seamlessly upgrading to operate as if they’d been bootstrapped in this way. This also applies to the finance CorDapp module.
Important
If you’re upgrading a node from Corda 3 to Corda 4 and there is old data in the vault, this upgrade may take some time, depending on the number of unconsumed states in the vault.
Ability to pre-validate configuration files¶
A new command has been added that lets you verify a config file is valid without starting up the rest of the node:
java -jar corda-4.0.jar validate-configuration
Flow control for notaries¶
Notary clusters can now exert backpressure on clients, to stop them from being overloaded. Nodes will be ordered to back off if a notary is getting too busy, and app flows will pause to give time for the load spike to pass. This change is transparent to both developers and administrators.
Retirement of non-elliptic Diffie-Hellman for TLS¶
The TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 family of ciphers is retired from the list of allowed ciphers for TLS as it is a legacy cipher family not supported by all native SSL/TLS implementations. We anticipate that this will have no impact on any deployed configurations.
Miscellaneous changes¶
To learn more about smaller changes, please read the Changelog.
Finally, we have added some new jokes. Thank you and good night!
Upgrading apps to Corda 4¶
These notes provide instructions for upgrading your CorDapps from previous versions. Corda provides backwards compatibility for public, non-experimental APIs that have been committed to. A list can be found in the Corda API page.
This means that you can upgrade your node across versions without recompiling or adjusting your CorDapps. You just have to upgrade your node and restart.
However, there are usually new features and other opt-in changes that may improve the security, performance or usability of your application that are worth considering for any actively maintained software. This guide shows you how to upgrade your app to benefit from the new features in the latest release.
Warning
The sample apps found in the Corda repository and the Corda samples repository are not intended to be used in production. If you are using them you should re-namespace them to a package namespace you control, and sign/version them yourself.
Contents
- Upgrading apps to Corda 4
- Step 1. Switch any RPC clients to use the new RPC library
- Step 2. Adjust the version numbers in your Gradle build files
- Step 3. Update your Gradle build file
- Step 4. Remove any custom configuration from the node.conf
- Step 5. Security: Upgrade your use of FinalityFlow
- Step 6. Security: Upgrade your use of SwapIdentitiesFlow
- Step 7. Possibly, adjust test code
- Step 8. Security: Add BelongsToContract annotations
- Step 9. Learn about signature constraints and JAR signing
- Step 10. Security: Package namespace handling
- Step 11. Consider adding extension points to your flows
- Step 12. Possibly update vault state queries
- Step 13. Explore other new features that may be useful
- Step 14. Possibly update your checked in quasar.jar
Step 1. Switch any RPC clients to use the new RPC library¶
Although the RPC API is backwards compatible with Corda 3, the RPC wire protocol isn’t. Therefore RPC clients like web servers need to be updated in lockstep with the node to use the new version of the RPC library. Corda 4 delivers RPC wire stability and therefore in future you will be able to update the node and apps without updating RPC clients.
Step 2. Adjust the version numbers in your Gradle build files¶
Alter the versions you depend on in your Gradle file like so:
ext.corda_release_version = '4.1-RC01'
ext.corda_gradle_plugins_version = '4.0.42'
ext.kotlin_version = '1.2.71'
ext.quasar_version = '0.7.10'
Note
You may wish to update your kotlinOptions to use language level 1.2, to benefit from the new features. Apps targeting Corda 4 may not at this time use Kotlin 1.3, as it was released too late in the development cycle for us to risk an upgrade. Sorry! Future work on app isolation will make it easier for apps to use newer Kotlin versions than the node itself uses.
You should also ensure you’re using Gradle 4.10 (but not 5). If you use the Gradle wrapper, run:
./gradlew wrapper --gradle-version 4.10.3
Otherwise just upgrade your installed copy in the usual manner for your operating system.
Step 3. Update your Gradle build file¶
There are several adjustments that are beneficial to make to your Gradle build file, beyond simply incrementing the versions as described in step 2.
Provide app metadata. This is used by the Corda Gradle build plugin to populate your app JAR with useful information. It should look like this:
cordapp {
    targetPlatformVersion 4
    minimumPlatformVersion 4
    contract {
        name "MegaApp Contracts"
        vendor "MegaCorp"
        licence "A liberal, open source licence"
        versionId 1
    }
    workflow {
        name "MegaApp flows"
        vendor "MegaCorp"
        licence "A really expensive proprietary licence"
        versionId 1
    }
}
Important
Watch out for the UK spelling of the word licence (with a c).
Name, vendor and licence can be set to any string you like; they don’t have to be Corda identities.
Target versioning is a new concept introduced in Corda 4. Learn more by reading Versioning. Setting a target version of 4 opts in to changes that might not be 100% backwards compatible, such as API semantics changes or disabling workarounds for bugs that may be in your apps, so by doing this you are promising that you have thoroughly tested your app on the new version. Using a high target version is a good idea because some features and improvements are only available to apps that opt in.
The minimum platform version is the platform version of the node that you require, so if you start using new APIs and features in Corda 4, you should set this to 4. Unfortunately Corda 3 and below do not know about this metadata and don’t check it, so your app will still be loaded in such nodes and may exhibit undefined behaviour at runtime. However it’s good to get in the habit of setting this properly for future releases.
Note
Whilst it’s currently a convention that Corda releases have the platform version number as their major version i.e. Corda 3.3 implements platform version 3, this is not actually required and may in future not hold true. You should know the platform version of the node releases you want to target.
The new versionId number is a version code for your app, and is unrelated to Corda’s own versions. It is used for informative purposes only. See “App versioning with signature constraints” for more information.
Split your app into contract and workflow JARs. The duplication between contract and workflow blocks exists because you should split your app into two separate JARs/modules: one that contains on-ledger validation code like states and contracts, and one for the rest (called by convention the “workflows” module, although it can contain a lot more than just flows: services would also go here, for instance). For simplicity, here we use one JAR for both, but this is in general an anti-pattern and can result in your flow logic code being sent over the network to arbitrary third-party peers, even though they don’t need it.
In future, the version ID attached to the workflow JAR will also be used to help implement smoother upgrade and migration features. You may reference your app’s Gradle version number directly when setting the CorDapp-specific versionId identifiers, provided it follows the convention of always being a whole number starting from 1.
If you use the finance demo app, you should adjust your dependencies so you depend on the finance-contracts and finance-workflows artifacts from your own contract and workflow JAR respectively.
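As a sketch of what those declarations might look like (the cordapp Gradle configuration and the net.corda artifact coordinates shown are assumptions to adapt to your own build):

dependencies {
    // On-ledger finance code (states and contracts) - referenced from your contracts module.
    cordapp "net.corda:corda-finance-contracts:$corda_release_version"
    // Finance flows - referenced from your workflows module.
    cordapp "net.corda:corda-finance-workflows:$corda_release_version"
}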
Step 4. Remove any custom configuration from the node.conf¶
CorDapps can no longer access custom configuration items in the node.conf file. Any custom CorDapp configuration should be added to a CorDapp configuration file. The node’s configuration will not be accessible. CorDapp configuration files should be placed in the config subdirectory of the node’s cordapps folder. The name of the file should match the name of the JAR of the CorDapp (e.g. if your CorDapp is called hello-0.1.jar, the configuration file needed would be cordapps/config/hello-0.1.conf).
If you are using the extraConfig of a node in the deployNodes Gradle task to populate custom configuration for testing, you will need to make the following change, so that this:
task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) {
    node {
        name "O=Bank A,L=London,C=GB"
        ...
        extraConfig = [ 'some.extra.config' : '12345' ]
    }
}
Would become:
task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) {
    node {
        name "O=Bank A,L=London,C=GB"
        ...
        projectCordapp {
            config "some.extra.config=12345"
        }
    }
}
See CorDapp configuration files for more information.
Step 5. Security: Upgrade your use of FinalityFlow¶
The previous FinalityFlow API is insecure. It doesn’t have a receive flow, so it requires counterparty nodes to accept any and all signed transactions that are sent to it, without checks. It is highly recommended that existing CorDapps migrate to the new API, as otherwise things like business network membership checks won’t be reliably enforced.
The flows that make use of FinalityFlow in a CorDapp can be classified in the following two basic categories:
- non-initiating flows: these are flows that finalise a transaction without the involvement of a counterpart flow at all.
- initiating flows: these are flows that initiate a counterpart (responder) flow.
The main difference between these two categories is relevant to how the CorDapp can be upgraded. The second category of flows can be upgraded to use the new FinalityFlow in a backwards compatible way, which means the upgraded CorDapp can be deployed to the various nodes using a rolling deployment. On the other hand, the first category of flows cannot be upgraded to the new FinalityFlow in a backwards compatible way, so the changes to these flows need to be deployed simultaneously at all the nodes, using a lockstep deployment.
Note
A lockstep deployment is one where all the involved nodes are stopped, upgraded to the new version of the CorDapp, and then re-started. As a result, there can’t be any nodes running different versions of the CorDapp at any time. A rolling deployment is one where every node can be stopped, upgraded to the new version of the CorDapp and re-started independently, at its own pace. As a result, there can be nodes running different versions of the CorDapp that still transact with each other successfully.
The upgrade is a three step process:
- Change the flow that calls FinalityFlow.
- Change or create the flow that will receive the finalised transaction.
- Make sure your application’s minimum and target version numbers are both set to 4 (see Step 2. Adjust the version numbers in your Gradle build files).
Upgrading a non-initiating flow¶
As an example, let’s take a very simple flow that finalises a transaction without the involvement of a counterpart flow:
class SimpleFlowUsingOldApi(private val counterparty: Party) : FlowLogic<SignedTransaction>() {
    @Suspendable
    override fun call(): SignedTransaction {
        val stx = dummyTransactionWithParticipant(counterparty)
        return subFlow(FinalityFlow(stx))
    }
}
public static class SimpleFlowUsingOldApi extends FlowLogic<SignedTransaction> {
    private final Party counterparty;

    @Suspendable
    @Override
    public SignedTransaction call() throws FlowException {
        SignedTransaction stx = dummyTransactionWithParticipant(counterparty);
        return subFlow(new FinalityFlow(stx));
    }
}
To use the new API, this flow needs to be annotated with InitiatingFlow, and a FlowSession to the participant(s) of the transaction must be passed to FinalityFlow:
// Notice how the flow *must* now be an initiating flow even when it wasn't before.
@InitiatingFlow
class SimpleFlowUsingNewApi(private val counterparty: Party) : FlowLogic<SignedTransaction>() {
    @Suspendable
    override fun call(): SignedTransaction {
        val stx = dummyTransactionWithParticipant(counterparty)
        // For each non-local participant in the transaction we must initiate a flow session with them.
        val session = initiateFlow(counterparty)
        return subFlow(FinalityFlow(stx, session))
    }
}
// Notice how the flow *must* now be an initiating flow even when it wasn't before.
@InitiatingFlow
public static class SimpleFlowUsingNewApi extends FlowLogic<SignedTransaction> {
    private final Party counterparty;

    @Suspendable
    @Override
    public SignedTransaction call() throws FlowException {
        SignedTransaction stx = dummyTransactionWithParticipant(counterparty);
        // For each non-local participant in the transaction we must initiate a flow session with them.
        FlowSession session = initiateFlow(counterparty);
        return subFlow(new FinalityFlow(stx, session));
    }
}
If there is more than one transaction participant, then a session must be initiated with each of them, excluding the local party and the notary.
A responder flow has to be introduced, which will automatically run on the other participants’ nodes and call ReceiveFinalityFlow to record the finalised transaction:
// All participants will run this flow to receive and record the finalised transaction into their vault.
@InitiatedBy(SimpleFlowUsingNewApi::class)
class SimpleNewResponderFlow(private val otherSide: FlowSession) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        subFlow(ReceiveFinalityFlow(otherSide))
    }
}
// All participants will run this flow to receive and record the finalised transaction into their vault.
@InitiatedBy(SimpleFlowUsingNewApi.class)
public static class SimpleNewResponderFlow extends FlowLogic<Void> {
    private final FlowSession otherSide;

    @Suspendable
    @Override
    public Void call() throws FlowException {
        subFlow(new ReceiveFinalityFlow(otherSide));
        return null;
    }
}
Note
As described above, all the nodes in your business network will need the new CorDapp, otherwise they won’t know how to receive the transaction. This includes nodes which previously didn’t have the old CorDapp. If a node is sent a transaction and it doesn’t have the new CorDapp loaded then simply restart it with the CorDapp and the transaction will be recorded.
Upgrading an initiating flow¶
For flows which are already initiating counterpart flows, it’s a matter of using the existing flow session. Note, however, that the new FinalityFlow is inlined, and so the sequence of sends and receives between the two flows will change and will be incompatible with your current flows. You can use the flow version API to write your flows in a backwards compatible manner.
Here’s what an upgraded initiating flow may look like:
// Assuming the previous version of the flow was 1 (the default if none is specified), we increment the version number to 2
// to allow for backwards compatibility with nodes running the old CorDapp.
@InitiatingFlow(version = 2)
class ExistingInitiatingFlow(private val counterparty: Party) : FlowLogic<SignedTransaction>() {
    @Suspendable
    override fun call(): SignedTransaction {
        val partiallySignedTx = dummyTransactionWithParticipant(counterparty)
        val session = initiateFlow(counterparty)
        val fullySignedTx = subFlow(CollectSignaturesFlow(partiallySignedTx, listOf(session)))
        // Determine which version of the flow the other side is using.
        return if (session.getCounterpartyFlowInfo().flowVersion == 1) {
            // Use the old API if the other side is using the previous version of the flow.
            subFlow(FinalityFlow(fullySignedTx))
        } else {
            // Otherwise they're at least on version 2 and so we can send the finalised transaction on the existing session.
            subFlow(FinalityFlow(fullySignedTx, session))
        }
    }
}
// Assuming the previous version of the flow was 1 (the default if none is specified), we increment the version number to 2
// to allow for backwards compatibility with nodes running the old CorDapp.
@InitiatingFlow(version = 2)
public static class ExistingInitiatingFlow extends FlowLogic<SignedTransaction> {
    private final Party counterparty;

    @Suspendable
    @Override
    public SignedTransaction call() throws FlowException {
        SignedTransaction partiallySignedTx = dummyTransactionWithParticipant(counterparty);
        FlowSession session = initiateFlow(counterparty);
        SignedTransaction fullySignedTx = subFlow(new CollectSignaturesFlow(partiallySignedTx, singletonList(session)));
        // Determine which version of the flow the other side is using.
        if (session.getCounterpartyFlowInfo().getFlowVersion() == 1) {
            // Use the old API if the other side is using the previous version of the flow.
            return subFlow(new FinalityFlow(fullySignedTx));
        } else {
            // Otherwise they're at least on version 2 and so we can send the finalised transaction on the existing session.
            return subFlow(new FinalityFlow(fullySignedTx, session));
        }
    }
}
For the responder flow, insert a call to ReceiveFinalityFlow at the location where it’s expecting to receive the finalised transaction. If the initiator is written in a backwards compatible way then so must the responder.
// First we have to run the SignTransactionFlow, which will return a SignedTransaction.
val txWeJustSigned = subFlow(object : SignTransactionFlow(otherSide) {
    @Suspendable
    override fun checkTransaction(stx: SignedTransaction) {
        // Implement responder flow transaction checks here
    }
})

if (otherSide.getCounterpartyFlowInfo().flowVersion >= 2) {
    // The other side is not using the old CorDapp so call ReceiveFinalityFlow to record the finalised transaction.
    // If SignTransactionFlow is used then we can verify the transaction we receive for recording is the same one
    // that was just signed.
    subFlow(ReceiveFinalityFlow(otherSide, expectedTxId = txWeJustSigned.id))
} else {
    // Otherwise the other side is running the old CorDapp and so we don't need to do anything further. The node
    // will automatically record the finalised transaction using the old insecure mechanism.
}
// First we have to run the SignTransactionFlow, which will return a SignedTransaction.
SignedTransaction txWeJustSigned = subFlow(new SignTransactionFlow(otherSide) {
    @Suspendable
    @Override
    protected void checkTransaction(@NotNull SignedTransaction stx) throws FlowException {
        // Implement responder flow transaction checks here
    }
});

if (otherSide.getCounterpartyFlowInfo().getFlowVersion() >= 2) {
    // The other side is not using the old CorDapp so call ReceiveFinalityFlow to record the finalised transaction.
    // If SignTransactionFlow is used then we can verify the transaction we receive for recording is the same one
    // that was just signed by passing the transaction id to ReceiveFinalityFlow.
    subFlow(new ReceiveFinalityFlow(otherSide, txWeJustSigned.getId()));
} else {
    // Otherwise the other side is running the old CorDapp and so we don't need to do anything further. The node
    // will automatically record the finalised transaction using the old insecure mechanism.
}
You may already be using waitForLedgerCommit in your responder flow to wait for the finalised transaction to appear in the local node’s vault. Now that it’s calling ReceiveFinalityFlow, which effectively does the same thing, this is no longer necessary. The call to waitForLedgerCommit should be removed.
Step 6. Security: Upgrade your use of SwapIdentitiesFlow¶
The confidential identities API is experimental in Corda 3 and remains so in Corda 4. In this release, the SwapIdentitiesFlow has been adjusted in the same way as FinalityFlow above, to close problems with confidential identities being injectable into a node outside of any other flow context. Old code will still work, but it is recommended to adjust your call sites so that a session is passed into the SwapIdentitiesFlow.
Step 7. Possibly, adjust test code¶
MockNodeParameters and the functions creating it no longer use a lambda expecting a NodeConfiguration object. Use a MockNetworkConfigOverrides object instead. This is an API change we regret, but unfortunately in Corda 3 we accidentally exposed large amounts of the node internal code through this one API entry point. We have now insulated the test API from node internals and reduced the exposure.
If you are constructing a MockServices for testing contracts, and your contract uses the Cash contract from the finance app, you now need to explicitly add net.corda.finance.contracts to the list of cordappPackages. This is part of the work to disentangle the finance app (which is really a demo app) from the Corda internals. Example:
val ledgerServices = MockServices(
    listOf("net.corda.examples.obligation", "net.corda.testing.contracts"),
    identityService = makeTestIdentityService(),
    initialIdentity = TestIdentity(CordaX500Name("TestIdentity", "", "GB"))
)
becomes:
val ledgerServices = MockServices(
    listOf("net.corda.examples.obligation", "net.corda.testing.contracts", "net.corda.finance.contracts"),
    identityService = makeTestIdentityService(),
    initialIdentity = TestIdentity(CordaX500Name("TestIdentity", "", "GB"))
)
You may need to use the new TestCordapp API when testing with the node driver or mock network, especially if you decide to stick with the pre-Corda 4 FinalityFlow API. The previous way of pulling CorDapps into your tests (i.e. via the cordappPackages parameter) does not honour CorDapp versioning.
The new TestCordapp.findCordapp() API discovers the CorDapps that contain the provided packages by scanning the classpath, so you have to ensure that the classpath the tests are running under contains either the CorDapp .jar or (if using Gradle) the relevant Gradle sub-project. In the first case, the versioning information in the CorDapp .jar file will be maintained. In the second case, the versioning information will be retrieved from the Gradle cordapp task.
For example, if you are using MockNetwork for your tests, the following code:
val mockNetwork = MockNetwork(
    cordappPackages = listOf("net.corda.examples.obligation", "net.corda.finance.contracts"),
    notarySpecs = listOf(MockNetworkNotarySpec(notary))
)
would need to be transformed into:
val mockNetwork = MockNetwork(MockNetworkParameters(
    cordappsForAllNodes = listOf(TestCordapp.findCordapp("net.corda.businessnetworks.membership")),
    notarySpecs = listOf(MockNetworkNotarySpec(notary))
))
Note that every package should exist in only one CorDapp, otherwise the discovery process won’t be able to determine which one to use and you will most probably see an exception telling you There is more than one CorDapp containing the package.
For instance, if you have two CorDapps containing the packages net.corda.examples.obligation.contracts and net.corda.examples.obligation.flows, you will get this error if you specify the package net.corda.examples.obligation.
Note
If you have any CorDapp code (e.g. flows/contracts/states) that is only used by the tests and located in the same test module, it won’t be discovered now. You will need to move it into the main module of one of your CorDapps, or create a new, separate CorDapp for it if you don’t want this code to live inside your production CorDapps.
Step 8. Security: Add BelongsToContract annotations¶
In versions of the platform prior to v4, it was the responsibility of contract and flow logic to ensure that TransactionState objects contained the correct class name of the expected contract class. If these checks were omitted, it would be possible for a malicious counterparty to construct a transaction containing e.g. a cash state governed by a commercial paper contract. The contract would see that there were no commercial paper states in a transaction and do nothing, i.e. accept.
In Corda 4 the platform takes over this responsibility from the app, if the app has a target version of 4 or higher. A state is expected to be governed by a contract that is either:
- The outer class of the state class, if the state is an inner class of a contract. This is a common design pattern.
- Annotated with @BelongsToContract, which specifies the contract class explicitly.
Learn more by reading “Contract/State Agreement”. If an app targets Corda 3 or lower (i.e. does not specify a target version), states that point to contracts outside their package will trigger a log warning but validation will proceed.
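Here is a minimal Kotlin sketch of the explicit annotation; ExampleContract and ExampleState are hypothetical names:

import net.corda.core.contracts.BelongsToContract
import net.corda.core.contracts.Contract
import net.corda.core.contracts.ContractState
import net.corda.core.identity.AbstractParty
import net.corda.core.transactions.LedgerTransaction

class ExampleContract : Contract {
    override fun verify(tx: LedgerTransaction) {
        // contract checks go here
    }
}

// Explicit association: on target version 4+, the platform rejects transactions
// that pair this state with any contract other than ExampleContract.
@BelongsToContract(ExampleContract::class)
data class ExampleState(override val participants: List<AbstractParty>) : ContractState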
Step 9. Learn about signature constraints and JAR signing¶
Signature constraints are a new data model feature introduced in Corda 4. They make it much easier to deploy application upgrades smoothly and in a decentralised manner. Signature constraints are the new default mode for CorDapps, and the act of upgrading your app to use the version 4 Gradle plugins will result in your app being automatically signed, and new states automatically using signature constraints selected based on these signing keys.
You can read more about signature constraints and what they do in API: Contract Constraints. The TransactionBuilder class will automatically use them if your application JAR is signed. We recommend all JARs are signed. To learn how to sign your JAR files, read Signing the CorDapp JAR. In dev mode, all JARs are signed by developer certificates. If a JAR that was signed with developer certificates is deployed to a production node, the node will refuse to start. Therefore, to deploy apps built for Corda 4 to production you will need to generate signing keys and integrate them with the build process.
Note
Please read the CorDapp constraints migration guide to understand how to upgrade CorDapps to use Corda 4 signature constraints and consume existing states on ledger issued with older constraint types (e.g. Corda 3.x states issued with hash or CZ whitelisted constraints).
Step 10. Security: Package namespace handling¶
Almost no apps will be affected by these changes, but they’re important to know about.
There are two improvements to how Java package protection is handled in Corda 4:
- Package sealing
- Package namespace ownership
Sealing. App isolation has been improved. Version 4 of the finance CorDapps (corda-finance-contracts.jar, corda-finance-workflows.jar) is now built as a set of sealed and signed JAR files. This means classes in your own CorDapps cannot be placed under the package namespace net.corda.finance. In the unlikely event that you were injecting code into net.corda.finance.* package namespaces from your own apps, you will need to move it into a new package, e.g. net/corda/finance/flows/MyClass.java can be moved to com/company/corda/finance/flows/MyClass.java. As a consequence, your classes are no longer able to access non-public members of finance CorDapp classes.
When signing your JARs for Corda 4, your own apps will also become sealed, meaning other JARs cannot place classes into your own packages. This is a security upgrade that ensures package-private visibility in Java code works correctly. If other apps could define classes in your own packages, they could call package-private methods, which may not be expected by the developers.
Namespace ownership. This part is only relevant if you are joining a production compatibility zone. You may wish to contact your zone operator and request ownership of your root package namespaces (e.g. com.megacorp.*), along with the signing keys you will be using to sign your app JARs. The zone operator can then add your signing key to the network parameters, and prevent attackers from defining types in your own package namespaces. Whilst this feature is optional and not strictly required, it may be helpful to block attacks at the boundaries of a Corda based application where type names may be taken “as read”. You can learn more about this feature and the motivation for it by reading “Package namespace ownership”.
Step 11. Consider adding extension points to your flows¶
In Corda 4 it is possible for flows in one app to subclass and take over flows from another. This allows you to create generic, shared flow logic that individual users can customise at pre-agreed points (protected methods). For example, a site-specific app could be developed that causes transaction details to be converted to a PDF and sent to a particular printer. This would be an inappropriate feature to put into shared business logic, but it makes perfect sense to put into a user-specific app they developed themselves.
If your flows could benefit from being extended in this way, read “Configuring Responder Flows” to learn more.
Step 12. Possibly update vault state queries¶
In Corda 4 queries made on a node’s vault can filter by the relevancy of those states to the node. As this functionality does not exist in Corda 3, apps will continue to receive all states in any vault queries. However, it may make sense to migrate queries expecting just those states relevant to the node in question to query for only relevant states. See API: Vault Query for more details on how to do this. Not doing this may result in queries returning more states than expected if the node is using observer functionality (see “Observer nodes”).
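As a hedged example (MyState is a placeholder for one of your own state types), a vault query restricted to relevant states might look like this:

import net.corda.core.node.services.Vault
import net.corda.core.node.services.vault.QueryCriteria

// Restrict a vault query to states relevant to this node (Corda 4+ only);
// the default relevancy status preserves the old behaviour of returning all states.
val criteria = QueryCriteria.VaultQueryCriteria(
    relevancyStatus = Vault.RelevancyStatus.RELEVANT
)
// val results = serviceHub.vaultService.queryBy<MyState>(criteria)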
Step 13. Explore other new features that may be useful¶
Corda 4 adds several new APIs that help you build applications. Why not explore:
- The new withEntityManager API for using JPA inside your flows and services.
- Reference states, which let you use an input state without consuming it.
- State pointers, which make it easier to ‘point’ to one state from another and follow the latest version of a linear state.
Please also read the CorDapp Upgradeability Guarantees associated with CorDapp upgrading.
Step 14. Possibly update your checked in quasar.jar¶
If your project is based on one of the official cordapp templates, it is likely you have a lib/quasar.jar checked in. It is worth noting that you only use this if you use the JUnit runner in IntelliJ. In the latest release of the cordapp templates, this directory has been removed.
You have some choices here:
- Upgrade your quasar.jar to 0.7.10
- Delete your lib directory and switch to using the Gradle test runner
Instructions for both options can be found in Running tests in IntelliJ.
Upgrading your node to Corda 4¶
Corda releases strive to be backwards compatible, so upgrading a node is fairly straightforward and should not require changes to applications. It consists of the following steps:
- Drain the node.
- Make a backup of your node directories and/or database.
- Replace the corda.jar file with the new version.
- Start up the node. This step may incur a delay whilst any needed database migrations are applied.
- Undrain it to re-enable processing of new inbound flows.
The protocol is designed to tolerate node outages, so during the upgrade process peers on the network will wait for your node to come back.
Step 1. Drain the node¶
Before a node or application on it can be upgraded, the node must be put in flow draining mode. This brings the currently running flows to a smooth halt, such that existing work is finished and new work is queued up rather than being processed.
Draining flows is a key task for node administrators to perform. It exists to simplify applications by ensuring apps don’t have to be able to migrate workflows from any arbitrary point to other arbitrary points, a task that would rapidly become infeasible as workflow and protocol complexity increases.
To drain the node, run the gracefulShutdown command. This will wait for the node to drain and then shut it down once the drain is complete.
Warning
The length of time a node takes to drain depends on both how your applications are designed, and whether any apps are currently talking to network peers that are offline or slow to respond. It is thus hard to give guidance on how long a drain should take, but in an environment with well written apps and in which your counterparties are online, drains may need only a few seconds.
Step 2. Make a backup of your node directories and/or database¶
It’s always a good idea to make a backup of your data before upgrading any server. This will make it easy to roll back if there’s a problem. You can simply make a copy of the node’s data directory to enable this. If you use an external non-H2 database please consult your database user guide to learn how to make backups.
We provide some backup recommendations if you’d like more detail.
Step 3. Replace corda.jar with the new version¶
Download the latest version of Corda from our Artifactory site. Make sure it’s available on your path, and that you’ve read the Release notes for Corda 4, in particular to discover what version of Java this node requires.
Important
Corda 4 requires Java 8u171 or any higher Java 8 patchlevel. Java 9+ is not currently supported.
Step 4. Start up the node¶
Start the node in the usual manner you have selected. The node will perform any automatic data migrations required, which may take some time. If the migration process is interrupted it can be continued simply by starting the node again, without harm.
Step 5. Undrain the node¶
You may now do any checks that you wish to perform, read the logs, and so on. When you are ready, use this command at the shell:
run setFlowsDrainingModeEnabled enabled: false
Your upgrade is complete.
Corda API¶
The following are the core APIs that are used in the development of CorDapps:
API: States¶
Note
Before reading this page, you should be familiar with the key concepts of States.
Contents
ContractState¶
In Corda, states are instances of classes that implement ContractState. The ContractState interface is defined as follows:
/**
 * A contract state (or just "state") contains opaque data used by a contract program. It can be thought of as a disk
 * file that the program can use to persist data across transactions. States are immutable: once created they are never
 * updated, instead, any changes must generate a new successor state. States can be updated (consumed) only once: the
 * notary is responsible for ensuring there is no "double spending" by only signing a transaction if the input states
 * are all free.
 */
@KeepForDJVM
@CordaSerializable
interface ContractState {
    /**
     * A _participant_ is any party that should be notified when the state is created or consumed.
     *
     * The list of participants is required for certain types of transactions. For example, when changing the notary
     * for this state, every participant has to be involved and approve the transaction
     * so that they receive the updated state, and don't end up in a situation where they can no longer use a state
     * they possess, since someone consumed that state during the notary change process.
     *
     * The participants list should normally be derived from the contents of the state.
     */
    val participants: List<AbstractParty>
}
ContractState has a single field, participants. participants is a List of the AbstractParty instances that are considered to have a stake in the state. Among other things, the participants will:
- Usually store the state in their vault (see below)
- Need to sign any notary-change and contract-upgrade transactions involving this state
- Receive any finalised transactions involving this state as part of FinalityFlow/ReceiveFinalityFlow
ContractState sub-interfaces¶
The behaviour of the state can be further customised by implementing sub-interfaces of ContractState. The two most common sub-interfaces are:
- LinearState
- OwnableState
LinearState models shared facts for which there is only one current version at any point in time. LinearState states evolve in a straight line by superseding themselves. On the other hand, OwnableState is meant to represent assets that can be freely split and merged over time. Cash is a good example of an OwnableState - two existing $5 cash states can be combined into a single $10 cash state, or split into five $1 cash states. With OwnableState, it’s the total amount held that is important, rather than the actual units held.
We can picture the hierarchy as follows:

LinearState¶
The LinearState interface is defined as follows:
/**
 * A state that evolves by superseding itself, all of which share the common "linearId".
 *
 * This simplifies the job of tracking the current version of certain types of state in e.g. a vault.
 */
@KeepForDJVM
interface LinearState : ContractState {
    /**
     * Unique id shared by all LinearState states throughout history within the vaults of all parties.
     * Verify methods should check that one input and one output share the id in a transaction,
     * except at issuance/termination.
     */
    val linearId: UniqueIdentifier
}
Remember that in Corda, states are immutable and can’t be updated directly. Instead, we represent an evolving fact as a sequence of LinearState states that share the same linearId and represent an audit trail for the lifecycle of the fact over time.
When we want to extend a LinearState chain (i.e. a sequence of states sharing a linearId), we:
- Use the linearId to extract the latest state in the chain from the vault
- Create a new state that has the same linearId
- Create a transaction with:
  - The current latest state in the chain as an input
  - The newly-created state as an output
The new state will now become the latest state in the chain, representing the new current state of the agreement.
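The three steps above might look roughly like this in a flow. This is a hedged sketch: AgreementState, the contract class name string and the omitted signing/finalisation steps are illustrative assumptions:

import co.paralleluniverse.fibers.Suspendable
import net.corda.core.contracts.LinearState
import net.corda.core.contracts.UniqueIdentifier
import net.corda.core.flows.FlowLogic
import net.corda.core.identity.AbstractParty
import net.corda.core.node.services.queryBy
import net.corda.core.node.services.vault.QueryCriteria
import net.corda.core.transactions.TransactionBuilder

// A hypothetical linear state; its contract is assumed to exist elsewhere.
data class AgreementState(
    val terms: String,
    override val linearId: UniqueIdentifier,
    override val participants: List<AbstractParty>
) : LinearState

class UpdateAgreementFlow(private val linearId: UniqueIdentifier,
                          private val newTerms: String) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        // 1. Extract the latest state in the chain from the vault.
        val criteria = QueryCriteria.LinearStateQueryCriteria(uuid = listOf(linearId.id))
        val latest = serviceHub.vaultService.queryBy<AgreementState>(criteria).states.single()
        // 2. Create a successor state sharing the same linearId.
        val successor = latest.state.data.copy(terms = newTerms)
        // 3. Build a transaction with the old state as input and the new one as output.
        val builder = TransactionBuilder(notary = latest.state.notary)
            .addInputState(latest)
            .addOutputState(successor, "com.example.AgreementContract")
        // ...add commands, verify, sign and run FinalityFlow as usual.
    }
}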
linearId is of type UniqueIdentifier, which is a combination of:
- A Java UUID representing a globally unique 128 bit random number
- An optional external-reference string for referencing the state in external systems
OwnableState¶
The OwnableState interface is defined as follows:
/**
 * Return structure for [OwnableState.withNewOwner]
 */
@KeepForDJVM
data class CommandAndState(val command: CommandData, val ownableState: OwnableState)

/**
 * A contract state that can have a single owner.
 */
@KeepForDJVM
interface OwnableState : ContractState {
    /** There must be a MoveCommand signed by this key to claim the amount. */
    val owner: AbstractParty

    /** Copies the underlying data structure, replacing the owner field with this new value and leaving the rest alone. */
    fun withNewOwner(newOwner: AbstractParty): CommandAndState
}
Where:
- owner is the PublicKey of the asset’s owner
- withNewOwner(newOwner: AbstractParty) creates a copy of the state with a new owner
Because OwnableState models fungible assets that can be merged and split over time, OwnableState instances do not have a linearId. $5 of cash created by one transaction is considered to be identical to $5 of cash produced by another transaction.
FungibleState¶
FungibleState<T> is an interface to represent things which are fungible; this means that there is an expectation that these things can be split and merged. That’s the only assumption made by this interface. This interface should be implemented if you want to represent fractional ownership in a thing, or if you have many things. Examples:
- There is only one Mona Lisa which you wish to issue 100 tokens, each representing a 1% interest in the Mona Lisa
- A company issues 1000 shares with a nominal value of 1, in one batch of 1000. This means the single batch of 1000 shares could be split up into 1000 units of 1 share.
The interface is defined as follows:
@KeepForDJVM
interface FungibleState<T : Any> : ContractState {
/**
* Amount represents a positive quantity of some token which can be cash, tokens, stock, agreements, or generally
* anything else that's quantifiable with integer quantities. See [Amount] for more details.
*/
val amount: Amount<T>
}
As seen, the interface takes a type parameter T that represents the fungible thing in question. This should describe the basic type of the asset, e.g. GBP, USD, oil, shares in company <X>, etc., and any additional metadata (issuer, grade, class, etc.). An upper-bound is not specified for T to ensure flexibility. Typically, a class would be provided that implements TokenizableAssetInfo so the thing can be easily added and subtracted using the Amount class.
This interface has been added in addition to FungibleAsset to provide some additional flexibility which FungibleAsset lacks, in particular:

- FungibleAsset defines an amount property of type Amount<Issued<T>>, therefore there is an assumption that all fungible things are issued by a single well-known party, but this is not always the case.
- FungibleAsset implements OwnableState, as such there is an assumption that all fungible things are ownable.
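A minimal sketch of implementing FungibleState directly, with no issuer or owner assumption (House, HouseContract and HouseToken are illustrative):

// Illustrative token type; displayTokenSize lets Amount do quantity arithmetic.
data class House(val address: String) : TokenizableAssetInfo {
    override val displayTokenSize: BigDecimal = BigDecimal.ONE
}

@BelongsToContract(HouseContract::class)
data class HouseToken(
        override val amount: Amount<House>,
        override val participants: List<AbstractParty>
) : FungibleState<House>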
Other interfaces¶
You can also customize your state by implementing the following interfaces:

- QueryableState, which allows the state to be queried in the node's database using custom attributes (see API: Persistence)
- SchedulableState, which allows us to schedule future actions for the state (e.g. a coupon payment on a bond) (see Event scheduling)
User-defined fields¶
Beyond implementing ContractState or a sub-interface, a state is allowed to have any number of additional fields and methods. For example, here is the relatively complex definition for a state representing cash:
/** A state representing a cash claim against some party. */
@BelongsToContract(Cash::class)
data class State(
override val amount: Amount<Issued<Currency>>,
/** There must be a MoveCommand signed by this key to claim the amount. */
override val owner: AbstractParty
) : FungibleAsset<Currency>, QueryableState {
constructor(deposit: PartyAndReference, amount: Amount<Currency>, owner: AbstractParty)
: this(Amount(amount.quantity, Issued(deposit, amount.token)), owner)
override val exitKeys = setOf(owner.owningKey, amount.token.issuer.party.owningKey)
override val participants = listOf(owner)
override fun withNewOwnerAndAmount(newAmount: Amount<Issued<Currency>>, newOwner: AbstractParty): FungibleAsset<Currency>
= copy(amount = amount.copy(newAmount.quantity), owner = newOwner)
override fun toString() = "${Emoji.bagOfCash}Cash($amount at ${amount.token.issuer} owned by $owner)"
override fun withNewOwner(newOwner: AbstractParty) = CommandAndState(Commands.Move(), copy(owner = newOwner))
infix fun ownedBy(owner: AbstractParty) = copy(owner = owner)
infix fun issuedBy(party: AbstractParty) = copy(amount = Amount(amount.quantity, amount.token.copy(issuer = amount.token.issuer.copy(party = party))))
infix fun issuedBy(deposit: PartyAndReference) = copy(amount = Amount(amount.quantity, amount.token.copy(issuer = deposit)))
infix fun withDeposit(deposit: PartyAndReference): Cash.State = copy(amount = amount.copy(token = amount.token.copy(issuer = deposit)))
/** Object Relational Mapping support. */
override fun generateMappedObject(schema: MappedSchema): PersistentState {
return when (schema) {
is CashSchemaV1 -> CashSchemaV1.PersistentCashState(
owner = this.owner,
pennies = this.amount.quantity,
currency = this.amount.token.product.currencyCode,
issuerPartyHash = this.amount.token.issuer.party.owningKey.toStringShort(),
issuerRef = this.amount.token.issuer.reference.bytes
)
/** Additional schema mappings would be added here (eg. CashSchemaV2, CashSchemaV3, ...) */
else -> throw IllegalArgumentException("Unrecognised schema $schema")
}
}
/** Object Relational Mapping support. */
override fun supportedSchemas(): Iterable<MappedSchema> = listOf(CashSchemaV1)
/** Additional used schemas would be added here (eg. CashSchemaV2, CashSchemaV3, ...) */
}
The vault¶
Whenever a node records a new transaction, it also decides whether it should store each of the transaction’s output states in its vault. The default vault implementation makes the decision based on the following rules:
- If the state is an OwnableState, the vault will store the state if the node is the state's owner
- Otherwise, the vault will store the state if the node is one of the participants
States that are not considered relevant are not stored in the node’s vault. However, the node will still store the transactions that created the states in its transaction storage.
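As a sketch, a flow or service can later retrieve the unconsumed states the vault decided to store (AgreementState is an illustrative state type):

// queryBy returns only unconsumed states by default.
val page = serviceHub.vaultService.queryBy<AgreementState>()
val stateAndRefs: List<StateAndRef<AgreementState>> = page.states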
TransactionState¶
When a ContractState is added to a TransactionBuilder, it is wrapped in a TransactionState:
typealias ContractClassName = String
/**
* A wrapper for [ContractState] containing additional platform-level state information and contract information.
* This is the definitive state that is stored on the ledger and used in transaction outputs.
*/
@CordaSerializable
data class TransactionState<out T : ContractState> @JvmOverloads constructor(
/** The custom contract state */
val data: T,
/**
* The contract class name that will verify this state that will be created via reflection.
* The attachment containing this class will be automatically added to the transaction at transaction creation
* time.
*
* Currently these are loaded from the classpath of the node which includes the cordapp directory - at some
* point these will also be loaded and run from the attachment store directly, allowing contracts to be
* sent across, and run, from the network from within a sandbox environment.
*/
// TODO: Implement the contract sandbox loading of the contract attachments
val contract: ContractClassName = requireNotNull(data.requiredContractClassName) {
//TODO: add link to docsite page, when there is one.
"""
Unable to infer Contract class name because state class ${data::class.java.name} is not annotated with
@BelongsToContract, and does not have an enclosing class which implements Contract. Either annotate ${data::class.java.name}
with @BelongsToContract, or supply an explicit contract parameter to TransactionState().
""".trimIndent().replace('\n', ' ')
},
/** Identity of the notary that ensures the state is not used as an input to a transaction more than once */
val notary: Party,
/**
* All contract states may be _encumbered_ by up to one other state.
*
* The encumbrance state, if present, forces additional controls over the encumbered state, since the platform checks
* that the encumbrance state is present as an input in the same transaction that consumes the encumbered state, and
* the contract code and rules of the encumbrance state will also be verified during the execution of the transaction.
* For example, a cash contract state could be encumbered with a time-lock contract state; the cash state is then only
* processable in a transaction that verifies that the time specified in the encumbrance time-lock has passed.
*
* The encumbered state refers to another by index, and the referred encumbrance state
* is an output state in a particular position on the same transaction that created the encumbered state. An alternative
* implementation would be encumbering by reference to a [StateRef], which would allow the specification of encumbrance
* by a state created in a prior transaction.
*
* Note that an encumbered state that is being consumed must have its encumbrance consumed in the same transaction,
* otherwise the transaction is not valid.
*/
val encumbrance: Int? = null,
/**
* A validator for the contract attachments on the transaction.
*/
val constraint: AttachmentConstraint = AutomaticPlaceholderConstraint) {
private companion object {
val logger = loggerFor<TransactionState<*>>()
}
init {
when {
data.requiredContractClassName == null -> logger.warn(
"""
State class ${data::class.java.name} is not annotated with @BelongsToContract,
and does not have an enclosing class which implements Contract. Annotate ${data::class.java.simpleName}
with @BelongsToContract(${contract.split("\\.\\$").last()}.class) to remove this warning.
""".trimIndent().replace('\n', ' ')
)
data.requiredContractClassName != contract -> logger.warn(
"""
State class ${data::class.java.name} belongs to contract ${data.requiredContractClassName},
but is bundled with contract $contract in TransactionState. Annotate ${data::class.java.simpleName}
with @BelongsToContract(${contract.split("\\.\\$").last()}.class) to remove this warning.
""".trimIndent().replace('\n', ' ')
)
}
}
}
Where:

- data is the state to be stored on-ledger
- contract is the contract governing evolutions of this state
- notary is the notary service for this state
- encumbrance points to another state that must also appear as an input to any transaction consuming this state
- constraint is a constraint on which contract-code attachments can be used with this state
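You rarely construct a TransactionState yourself; adding an output to a TransactionBuilder performs the wrapping for you. A sketch, with state and notary assumed to be in scope:

val builder = TransactionBuilder(notary)
// Wraps `state` in a TransactionState, inferring the contract from
// @BelongsToContract and defaulting the constraint to AutomaticPlaceholderConstraint.
builder.addOutputState(state)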
Reference states¶
A reference input state is a ContractState which can be referred to in a transaction by the contracts of input and output states but whose contract is not executed as part of the transaction verification process. Furthermore, reference states are not consumed when the transaction is committed to the ledger, but they are checked for “current-ness”. In other words, the contract logic is not run for the referencing transaction only; a reference state is still a normal state when it occurs in an input or output position.
Reference data states enable many parties to reuse the same state in their transactions as reference data whilst still allowing the reference data state owner the capability to update the state. A standard example would be the creation of financial instrument reference data and the use of such reference data by parties holding the related financial instruments.
Just like regular input states, the chain of provenance for reference states is resolved and all dependency transactions verified. This is because users of reference data must be satisfied that the data they are referring to is valid as per the rules of the contract which governs it and that all previous participants of the state assented to updates of it.
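As a sketch, a resolved StateAndRef is added to a transaction in the reference-input position via referenced() (refStateAndRef is assumed to be in scope):

val builder = TransactionBuilder(notary)
// The referenced state is checked for current-ness by the notary but not consumed.
builder.addReferenceState(refStateAndRef.referenced())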
Known limitations:
Notary change: It is likely the case that users of reference states do not have permission to change the notary assigned to a reference state. Even if users did have this permission, the result would likely be a series of notary-change races. As such, if a reference state is added to a transaction that is assigned to a different notary than the input and output states, then all those inputs and outputs must be moved to the notary which the reference state uses.
If two or more reference states assigned to different notaries are added to a transaction, then it follows that this transaction cannot be committed to the ledger. This would also be the case for transactions not containing reference states. There is an additional complication for transactions including reference states: it is unlikely that the party using the reference states has the authority to change the notary for the state (in other words, the party using the reference state would not be listed as a participant on it). Therefore, it is likely that a transaction containing reference states with two different notaries cannot be committed to the ledger.
As such, if reference states assigned to multiple different notaries are added to a transaction builder, the builder's notary consistency check will fail.
Warning
Currently, encumbrances should not be used with reference states. In the case where a state is encumbered by an encumbrance state, the encumbrance state should also be referenced in the same transaction that references the encumbered state. This is because the data contained within the encumbered state may take on a different meaning, and likely would do, once the encumbrance state is taken into account.
State Pointers¶
A StatePointer contains a pointer to a ContractState. The StatePointer can be included in a ContractState as a property, or included in an off-ledger data structure. StatePointers can be resolved to a StateAndRef by performing a look-up. There are two types of pointers: linear and static.

- StaticPointers are for use with any type of ContractState. The StaticPointer does as it suggests: it always points to the same ContractState.
- The LinearPointer is for use with LinearStates. They are particularly useful because, due to the way LinearStates work, the pointer will automatically point you to the latest version of a LinearState that the node performing resolve is aware of. In effect, the pointer “moves” as the LinearState is updated.
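A sketch of creating and resolving a LinearPointer (AgreementState is illustrative; resolve performs the vault look-up):

val pointer = LinearPointer(agreementState.linearId, AgreementState::class.java)
// Returns the latest StateAndRef for that linearId that this node is aware of.
val latest: StateAndRef<AgreementState> = pointer.resolve(serviceHub)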
State pointers use reference states to enable the functionality described above. They can be conceptualised as a mechanism to formalise a development pattern where one needs to refer to a specific state from another transaction (StaticPointer) or a particular lineage of states (LinearPointer). In other words, StatePointers do not enable a feature in Corda which was previously unavailable. Rather, they help to formalise a pattern which was already possible. In that light, it is worth noting some issues which you may encounter in their application:

- If the node calling resolve has not seen any transactions containing a ContractState which the StatePointer points to, then resolve will throw an exception. Here, the node calling resolve might be missing some crucial data.
- The node calling resolve for a LinearPointer may have seen and stored transactions containing a LinearState with the specified linearId. However, there is no guarantee the StateAndRef<T> returned by resolve is the most recent version of the LinearState. The node only returns the most recent version that it is aware of.
Resolving state pointers in TransactionBuilder¶
When building transactions, any StatePointers contained within inputs or outputs added to a TransactionBuilder can be optionally resolved to reference states using the resolveStatePointers method. The effect is that the pointed-to data is carried along with the transaction. This may or may not be appropriate in all circumstances, which is why calling the method is optional.
API: Persistence¶
Corda offers developers the option to expose all or some parts of a contract state to an Object Relational Mapping (ORM) tool to be persisted in a Relational Database Management System (RDBMS).
The purpose of this is to assist vault development and allow for the persistence of state data to a custom database table. Persisted states held in the vault are indexed for the purposes of executing queries. This also allows for relational joins between Corda tables and the organization's existing data.
The Object Relational Mapping is specified using Java Persistence API (JPA) annotations. This mapping is persisted to the database as a table row (a single, implicitly structured data item) by the node automatically every time a state is recorded in the node’s local vault as part of a transaction.
Note
By default, nodes use an H2 database which is accessed using Java Database Connectivity (JDBC). Any database with a JDBC driver is a candidate, and several integrations have been contributed by the community. Please see the info in “Node database” for details.
Schemas¶
Every ContractState may implement the QueryableState interface if it wishes to be inserted into a custom table in the node's database and made accessible using SQL.
/**
* A contract state that may be mapped to database schemas configured for this node to support querying for,
* or filtering of, states.
*/
@KeepForDJVM
interface QueryableState : ContractState {
/**
* Enumerate the schemas this state can export representations of itself as.
*/
fun supportedSchemas(): Iterable<MappedSchema>
/**
* Export a representation for the given schema.
*/
fun generateMappedObject(schema: MappedSchema): PersistentState
}
The QueryableState interface requires the state to enumerate the different relational schemas it supports, for instance in situations where the schema has evolved. Each relational schema is represented as a MappedSchema object returned by the state's supportedSchemas method.
Nodes have an internal SchemaService which decides what data to persist by selecting the MappedSchema to use. Once a MappedSchema is selected, the SchemaService will delegate to the QueryableState to generate a corresponding representation (mapped object) via the generateMappedObject method, the output of which is then passed to the ORM.
/**
* A configuration and customisation point for Object Relational Mapping of contract state objects.
*/
interface SchemaService {
/**
* Represents any options configured on the node for a schema.
*/
data class SchemaOptions(val databaseSchema: String? = null, val tablePrefix: String? = null)
/**
* Options configured for this node's schemas. A missing entry for a schema implies all properties are null.
*/
val schemaOptions: Map<MappedSchema, SchemaOptions>
/**
* Given a state, select schemas to map it to that are supported by [generateMappedObject] and that are configured
* for this node.
*/
fun selectSchemas(state: ContractState): Iterable<MappedSchema>
/**
* Map a state to a [PersistentState] for the given schema, either via direct support from the state
* or via custom logic in this service.
*/
fun generateMappedObject(state: ContractState, schema: MappedSchema): PersistentState
}
/**
* A database schema that might be configured for this node. As well as a name and version for identifying the schema,
* also list the classes that may be used in the generated object graph in order to configure the ORM tool.
*
* @param schemaFamily A class to fully qualify the name of a schema family (i.e. excludes version)
* @param version The version number of this instance within the family.
* @param mappedTypes The JPA entity classes that the ORM layer needs to be configure with for this schema.
*/
@KeepForDJVM
open class MappedSchema(schemaFamily: Class<*>,
val version: Int,
val mappedTypes: Iterable<Class<*>>) {
val name: String = schemaFamily.name
/**
* Optional classpath resource containing the database changes for the [mappedTypes]
*/
open val migrationResource: String? = null
override fun toString(): String = "${this.javaClass.simpleName}(name=$name, version=$version)"
override fun equals(other: Any?): Boolean {
if (this === other) return true
if (javaClass != other?.javaClass) return false
other as MappedSchema
if (version != other.version) return false
if (mappedTypes != other.mappedTypes) return false
if (name != other.name) return false
return true
}
override fun hashCode(): Int {
var result = version
result = 31 * result + mappedTypes.hashCode()
result = 31 * result + name.hashCode()
return result
}
}
With this framework, the relational view of ledger states can evolve in a controlled fashion in lock-step with internal systems or other integration points and is not dependent on changes to the contract code.
It is expected that multiple contract state implementations might provide mappings within a single schema. For example, an Interest Rate Swap contract and an Equity OTC Option contract might both provide a mapping to a Derivative contract within the same schema. The schemas should typically not be part of the contract itself and should exist independently, to encourage re-use of a common set within a particular business area or CorDapp.
Note
It's advisable to avoid cross-references between different schemas as this may cause issues when evolving MappedSchema or migrating its data. At startup, nodes log such violations as warnings stating that there's a cross-reference between MappedSchemas. The detailed messages incorporate information about which schemas, entities and fields are involved.
MappedSchema offers a family name that is disambiguated using Java-package-style name-spacing derived from the class name of a schema family class that is constant across versions, allowing the SchemaService to select a preferred version of a schema.
The SchemaService is also responsible for the SchemaOptions that can be configured for a particular MappedSchema. These allow the configuration of database schemas or table name prefixes to avoid clashes with other MappedSchemas.
Note
It is intended that there should be plugin support for the SchemaService to offer version upgrading, implementation of additional schemas, and the ability to configure which schemas are active. The present implementation does not include these features and simply results in all versions of all schemas supported by a QueryableState being persisted. This will change in due course. Similarly, the service does not currently support configuring SchemaOptions but will do so in the future.
Custom schema registration¶
Custom contract schemas are automatically registered at startup time for CorDapps. The node bootstrap process will scan for states that implement the QueryableState interface. Tables are then created as specified by the MappedSchema identified by each state's supportedSchemas method.
For testing purposes it is necessary to manually register the packages containing custom schemas, as follows:

- Tests using MockNetwork and MockNode must explicitly register packages using the cordappPackages parameter of MockNetwork
- Tests using MockServices must explicitly register packages using the cordappPackages parameter of the MockServices makeTestDatabaseAndMockServices() helper method
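A sketch of such a registration in a test, assuming the Corda 4 test API where MockNetwork accepts a cordappPackages list (the package names are illustrative):

// MockNetwork scans these packages for QueryableState implementations
// and registers their custom schemas.
val network = MockNetwork(cordappPackages = listOf("com.example.schemas", "com.example.contracts"))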
Note
Tests using the DriverDSL will automatically register your custom schemas if they are in the same project structure as the driver call.
Object relational mapping¶
To facilitate the ORM, the persisted representation of a QueryableState should be an instance of a PersistentState subclass, constructed either by the state itself or by a plugin to the SchemaService. This allows the ORM layer to always associate a StateRef with a persisted representation of a ContractState and allows joining with the set of unconsumed states in the vault.
The PersistentState subclass should be marked up as a JPA 2.1 Entity with a defined table name and with properties (in Kotlin; getters/setters in Java) annotated to map to the appropriate columns and SQL types. Additional entities can be included to model these properties where they are more complex, for example collections (see Persisting hierarchical data), so the mapping does not have to be flat. The MappedSchema constructor accepts a list of all JPA entity classes for that schema in the mappedTypes parameter. This list must be provided in order to initialise the ORM layer.
Several examples of entities and mappings are provided in the codebase, including Cash.State and CommercialPaper.State. For example, here's the first version of the cash schema.
package net.corda.finance.schemas
import net.corda.core.identity.AbstractParty
import net.corda.core.schemas.MappedSchema
import net.corda.core.schemas.PersistentState
import net.corda.core.serialization.CordaSerializable
import net.corda.core.utilities.MAX_HASH_HEX_SIZE
import net.corda.core.contracts.MAX_ISSUER_REF_SIZE
import org.hibernate.annotations.Type
import javax.persistence.*
/**
* An object used to fully qualify the [CashSchema] family name (i.e. independent of version).
*/
object CashSchema
/**
* First version of a cash contract ORM schema that maps all fields of the [Cash] contract state as it stood
* at the time of writing.
*/
@CordaSerializable
object CashSchemaV1 : MappedSchema(schemaFamily = CashSchema.javaClass, version = 1, mappedTypes = listOf(PersistentCashState::class.java)) {
override val migrationResource = "cash.changelog-master"
@Entity
@Table(name = "contract_cash_states", indexes = [Index(name = "ccy_code_idx", columnList = "ccy_code"), Index(name = "pennies_idx", columnList = "pennies")])
class PersistentCashState(
/** X500Name of owner party **/
@Column(name = "owner_name", nullable = true)
var owner: AbstractParty?,
@Column(name = "pennies", nullable = false)
var pennies: Long,
@Column(name = "ccy_code", length = 3, nullable = false)
var currency: String,
@Column(name = "issuer_key_hash", length = MAX_HASH_HEX_SIZE, nullable = false)
var issuerPartyHash: String,
@Column(name = "issuer_ref", length = MAX_ISSUER_REF_SIZE, nullable = false)
@Type(type = "corda-wrapper-binary")
var issuerRef: ByteArray
) : PersistentState()
}
Note
If a CorDapp needs to be portable between Corda open source (running against H2) and Corda Enterprise (running against a standalone database), consider database vendors' specific requirements. Ensure that table and column names are compatible with the naming conventions of the database vendors for which the CorDapp will be deployed, e.g. for Oracle databases prior to version 12.2, the maximum length of a table/column name is 30 bytes (the exact number of characters depends on the database encoding).
Persisting hierarchical data¶
You may wish to persist hierarchical relationships within state data using multiple database tables. In order to facilitate this, multiple PersistentState subclasses may be implemented. The relationship between these classes is defined using JPA annotations. It is important to note that the MappedSchema constructor requires a list of all of these subclasses.
An example schema implementing hierarchical relationships with JPA annotations is shown below. This schema will cause parent_data and child_data tables to be created.
@CordaSerializable
public class SchemaV1 extends MappedSchema {
/**
* This class must extend the MappedSchema class. Its name is based on the SchemaFamily name and the associated version number abbreviation (V1, V2... Vn).
* In the constructor, use the super keyword to call the constructor of MappedSchema with the following arguments: a class literal representing the schema family,
* a version number and a collection of mappedTypes (class literals) which represent JPA entity classes that the ORM layer needs to be configured with for this schema.
*/
public SchemaV1() {
super(Schema.class, 1, ImmutableList.of(PersistentParentToken.class, PersistentChildToken.class));
}
/**
* The @entity annotation signifies that the specified POJO class' non-transient fields should be persisted to a relational database using the services
* of an entity manager. The @table annotation specifies properties of the table that will be created to contain the persisted data, in this case we have
* specified a name argument which will be used the table's title.
*/
@Entity
@Table(name = "parent_data")
public static class PersistentParentToken extends PersistentState {
/**
* The @Column annotations specify the columns that will comprise the inserted table and specify the shape of the fields and associated
* data types of each database entry.
*/
@Column(name = "owner") private final String owner;
@Column(name = "issuer") private final String issuer;
@Column(name = "amount") private final int amount;
@Column(name = "linear_id") public final UUID linearId;
/**
* The @OneToMany annotation specifies a one-to-many relationship between this class and a collection included as a field.
* The @JoinColumn and @JoinColumns annotations specify on which columns these tables will be joined on.
*/
@OneToMany(cascade = CascadeType.PERSIST)
@JoinColumns({
@JoinColumn(name = "output_index", referencedColumnName = "output_index"),
@JoinColumn(name = "transaction_id", referencedColumnName = "transaction_id"),
})
private final List<PersistentChildToken> listOfPersistentChildTokens;
public PersistentParentToken(String owner, String issuer, int amount, UUID linearId, List<PersistentChildToken> listOfPersistentChildTokens) {
this.owner = owner;
this.issuer = issuer;
this.amount = amount;
this.linearId = linearId;
this.listOfPersistentChildTokens = listOfPersistentChildTokens;
}
// Default constructor required by hibernate.
public PersistentParentToken() {
this.owner = "";
this.issuer = "";
this.amount = 0;
this.linearId = UUID.randomUUID();
this.listOfPersistentChildTokens = null;
}
public String getOwner() {
return owner;
}
public String getIssuer() {
return issuer;
}
public int getAmount() {
return amount;
}
public UUID getLinearId() {
return linearId;
}
public List<PersistentChildToken> getChildTokens() { return listOfPersistentChildTokens; }
}
@Entity
@CordaSerializable
@Table(name = "child_data")
public static class PersistentChildToken {
// The @Id annotation marks this field as the primary key of the persisted entity.
@Id
private final UUID Id;
@Column(name = "owner")
private final String owner;
@Column(name = "issuer")
private final String issuer;
@Column(name = "amount")
private final int amount;
/**
* The @ManyToOne annotation specifies that this class will be present as a member of a collection on a parent class and that it should
* be persisted with the joining columns specified in the parent class. It is important to note the targetEntity parameter which should correspond
* to a class literal of the parent class.
*/
@ManyToOne(targetEntity = PersistentParentToken.class)
// The field type matches targetEntity; the original TokenState type was undefined in this example.
private final PersistentParentToken persistentParentToken;
public PersistentChildToken(String owner, String issuer, int amount) {
this.Id = UUID.randomUUID();
this.owner = owner;
this.issuer = issuer;
this.amount = amount;
this.persistentParentToken = null;
}
// Default constructor required by hibernate.
public PersistentChildToken() {
this.Id = UUID.randomUUID();
this.owner = "";
this.issuer = "";
this.amount = 0;
this.persistentParentToken = null;
}
public UUID getId() {
return Id;
}
public String getOwner() {
return owner;
}
public String getIssuer() {
return issuer;
}
public int getAmount() {
return amount;
}
public PersistentParentToken getPersistentParentToken() {
return persistentParentToken;
}
}
}
@CordaSerializable
object SchemaV1 : MappedSchema(schemaFamily = Schema::class.java, version = 1, mappedTypes = listOf(PersistentParentToken::class.java, PersistentChildToken::class.java)) {
@Entity
@Table(name = "parent_data")
class PersistentParentToken(
@Column(name = "owner")
var owner: String,
@Column(name = "issuer")
var issuer: String,
@Column(name = "amount")
var currency: Int,
@Column(name = "linear_id")
var linear_id: UUID,
// Mirrors the Java example above: the parent owns the one-to-many relationship.
@OneToMany(cascade = [CascadeType.PERSIST])
@JoinColumns(JoinColumn(name = "transaction_id", referencedColumnName = "transaction_id"), JoinColumn(name = "output_index", referencedColumnName = "output_index"))
var listOfPersistentChildTokens: MutableList<PersistentChildToken>
) : PersistentState()
@Entity
@CordaSerializable
@Table(name = "child_data")
class PersistentChildToken(
@Id
var Id: UUID = UUID.randomUUID(),
@Column(name = "owner")
var owner: String,
@Column(name = "issuer")
var issuer: String,
@Column(name = "amount")
var currency: Int,
@Column(name = "linear_id")
var linear_id: UUID,
@ManyToOne(targetEntity = PersistentParentToken::class)
var persistentParentToken: PersistentParentToken
) : PersistentState()
Identity mapping¶
Schema entity attributes defined by identity types (AbstractParty, Party, AnonymousParty) are automatically processed to ensure only the X500Name of the identity is persisted where an identity is well known; otherwise a null value is stored in the associated column. To preserve privacy, identity keys are never persisted. Developers should use the IdentityService to resolve keys from well-known X500 identity names.
JDBC session¶
Apps may also interact directly with the underlying node's database by using a standard JDBC connection (session) as described by the Java SQL Connection API.
Use the ServiceHub jdbcSession function to obtain a JDBC connection, as illustrated in the following example:
val nativeQuery = "SELECT v.transaction_id, v.output_index FROM vault_states v WHERE v.state_status = 0"
database.transaction {
    // Execute a native SQL query against the node's database.
    val jdbcSession = services.jdbcSession()
    val prepStatement = jdbcSession.prepareStatement(nativeQuery)
    val rs = prepStatement.executeQuery()
}
JDBC sessions can be used in flows and services (see “Writing flows”).
The following example illustrates the creation of a custom Corda service using a jdbcSession:
object CustomVaultQuery {
@CordaService
class Service(val services: AppServiceHub) : SingletonSerializeAsToken() {
private companion object {
private val log = contextLogger()
}
fun rebalanceCurrencyReserves(): List<Amount<Currency>> {
val nativeQuery = """
select
cashschema.ccy_code,
sum(cashschema.pennies)
from
vault_states vaultschema
join
contract_cash_states cashschema
where
vaultschema.output_index=cashschema.output_index
and vaultschema.transaction_id=cashschema.transaction_id
and vaultschema.state_status=0
group by
cashschema.ccy_code
order by
sum(cashschema.pennies) desc
"""
log.info("SQL to execute: $nativeQuery")
val session = services.jdbcSession()
return session.prepareStatement(nativeQuery).use { prepStatement ->
prepStatement.executeQuery().use { rs ->
val topUpLimits: MutableList<Amount<Currency>> = mutableListOf()
while (rs.next()) {
val currencyStr = rs.getString(1)
val amount = rs.getLong(2)
log.info("$currencyStr : $amount")
topUpLimits.add(Amount(amount, Currency.getInstance(currencyStr)))
}
topUpLimits
}
}
}
}
}
which is then referenced within a custom flow:
@Suspendable
@Throws(CashException::class)
override fun call(): List<SignedTransaction> {
progressTracker.currentStep = AWAITING_REQUEST
val topupRequest = otherPartySession.receive<TopupRequest>().unwrap {
it
}
val customVaultQueryService = serviceHub.cordaService(CustomVaultQuery.Service::class.java)
val reserveLimits = customVaultQueryService.rebalanceCurrencyReserves()
val txns: List<SignedTransaction> = reserveLimits.map { amount ->
// request asset issue
logger.info("Requesting currency issue $amount")
val txn = issueCashTo(amount, topupRequest.issueToParty, topupRequest.issuerPartyRef, topupRequest.notaryParty)
progressTracker.currentStep = SENDING_TOP_UP_ISSUE_REQUEST
return@map txn.stx
}
otherPartySession.send(txns)
return txns
}
For examples of testing @CordaService implementations, see the oracle example here.
JPA Support¶
In addition to jdbcSession, ServiceHub also exposes the Java Persistence API to flows via the withEntityManager method. This method can be used to persist and query entities which inherit from MappedSchema. This is particularly useful if off-ledger data must be maintained in conjunction with on-ledger state data.
Note
Your entity must be included as a mappedType as part of a MappedSchema for it to be added to Hibernate as a custom schema. If it's not included as a mappedType, a corresponding table will not be created. See the samples below.
The code snippet below defines a PersistentFoo type inside FooSchemaV1. Note that PersistentFoo is added to a list of mapped types which is passed to MappedSchema. This is exactly how state schemas are defined, except that the entity in this case should not subclass PersistentState (as it is not a state object). See examples:
public class FooSchema {}
public class FooSchemaV1 extends MappedSchema {
FooSchemaV1() {
super(FooSchema.class, 1, ImmutableList.of(PersistentFoo.class));
}
@Entity
@Table(name = "foos")
public static class PersistentFoo implements Serializable {
@Id
@Column(name = "foo_id")
String fooId;
@Column(name = "foo_data")
String fooData;
}
}
object FooSchema
object FooSchemaV1 : MappedSchema(schemaFamily = FooSchema.javaClass, version = 1, mappedTypes = listOf(PersistentFoo::class.java)) {
@Entity
@Table(name = "foos")
class PersistentFoo(@Id @Column(name = "foo_id") var fooId: String, @Column(name = "foo_data") var fooData: String) : Serializable
}
Instances of PersistentFoo can be manually persisted inside a flow as follows:
PersistentFoo foo = new PersistentFoo(new UniqueIdentifier().getId().toString(), "Bar");
serviceHub.withEntityManager(entityManager -> {
entityManager.persist(foo);
return null;
});
val foo = FooSchemaV1.PersistentFoo(UniqueIdentifier().id.toString(), "Bar")
serviceHub.withEntityManager {
persist(foo)
}
And retrieved via a query, as follows:
node.getServices().withEntityManager((EntityManager entityManager) -> {
CriteriaQuery<PersistentFoo> query = entityManager.getCriteriaBuilder().createQuery(PersistentFoo.class);
Root<PersistentFoo> type = query.from(PersistentFoo.class);
query.select(type);
return entityManager.createQuery(query).getResultList();
});
val result: MutableList<FooSchemaV1.PersistentFoo> = services.withEntityManager {
val query = criteriaBuilder.createQuery(FooSchemaV1.PersistentFoo::class.java)
val type = query.from(FooSchemaV1.PersistentFoo::class.java)
query.select(type)
createQuery(query).resultList
}
Please note that suspendable flow operations such as:
- FlowSession.send
- FlowSession.receive
- FlowLogic.receiveAll
- FlowLogic.sleep
- FlowLogic.subFlow

cannot be used within the lambda function passed to withEntityManager.
API: Contracts¶
Note
Before reading this page, you should be familiar with the key concepts of Contracts.
Contract¶
Contracts are classes that implement the Contract interface. The Contract interface is defined as follows:
/**
* Implemented by a program that implements business logic on the shared ledger. All participants run this code for
* every [net.corda.core.transactions.LedgerTransaction] they see on the network, for every input and output state. All
* contracts must accept the transaction for it to be accepted: failure of any aborts the entire thing. The time is taken
* from a trusted time-window attached to the transaction itself i.e. it is NOT necessarily the current time.
*
* TODO: Contract serialization is likely to change, so the annotation is likely temporary.
*/
@KeepForDJVM
@CordaSerializable
interface Contract {
/**
* Takes an object that represents a state transition, and ensures the inputs/outputs/commands make sense.
* Must throw an exception if there's a problem that should prevent state transition. Takes a single object
* rather than an argument so that additional data can be added without breaking binary compatibility with
* existing contract code.
*/
@Throws(IllegalArgumentException::class)
fun verify(tx: LedgerTransaction)
}
Contract has a single method, verify, which takes a LedgerTransaction as input and returns nothing. This function is used to check whether a transaction proposal is valid, as follows:

- We gather together the contracts of each of the transaction's input and output states
- We call each contract's verify function, passing in the transaction as an input
- The proposal is only valid if none of the verify calls throw an exception
verify is executed in a sandbox:

- It does not have access to the enclosing scope
- The libraries available to it are whitelisted to disallow:
  - Network access
  - I/O such as disk or database access
  - Sources of randomness such as the current time or random number generators
This means that verify only has access to the properties defined on LedgerTransaction when deciding whether a transaction is valid.
Here are the two simplest verify functions:
- A verify that accepts all possible transactions:
override fun verify(tx: LedgerTransaction) {
// Always accepts!
}
@Override
public void verify(LedgerTransaction tx) {
// Always accepts!
}
- A verify that rejects all possible transactions:
override fun verify(tx: LedgerTransaction) {
throw IllegalArgumentException("Always rejects!")
}
@Override
public void verify(LedgerTransaction tx) {
throw new IllegalArgumentException("Always rejects!");
}
LedgerTransaction¶
The LedgerTransaction object passed into verify has the following properties:
/** The resolved input states which will be consumed/invalidated by the execution of this transaction. */
override val inputs: List<StateAndRef<ContractState>>,
/** The outputs created by the transaction. */
override val outputs: List<TransactionState<ContractState>>,
/** Arbitrary data passed to the program of each input state. */
val commands: List<CommandWithParties<CommandData>>,
/** A list of [Attachment] objects identified by the transaction that are needed for this transaction to verify. */
val attachments: List<Attachment>,
/** The hash of the original serialised WireTransaction. */
override val id: SecureHash,
/** The notary that the tx uses, this must be the same as the notary of all the inputs, or null if there are no inputs. */
override val notary: Party?,
/** The time window within which the tx is valid, will be checked against notary pool member clocks. */
val timeWindow: TimeWindow?,
/** Random data used to make the transaction hash unpredictable even if the contents can be predicted; needed to avoid some obscure attacks. */
val privacySalt: PrivacySalt,
/**
* Network parameters that were in force when the transaction was constructed. This is nullable only for backwards
* compatibility for serialized transactions. In reality this field will always be set when on the normal codepaths.
*/
override val networkParameters: NetworkParameters?,
/** Referenced states, which are like inputs but won't be consumed. */
override val references: List<StateAndRef<ContractState>>
Where:

- inputs are the transaction's inputs, as List<StateAndRef<ContractState>>
- outputs are the transaction's outputs, as List<TransactionState<ContractState>>
- commands are the transaction's commands and associated signers, as List<CommandWithParties<CommandData>>
- attachments are the transaction's attachments, as List<Attachment>
- notary is the transaction's notary. This must match the notary of all the inputs
- timeWindow defines the window during which the transaction can be notarised
LedgerTransaction exposes a large number of utility methods to access the transaction's contents:

- inputStates extracts the input ContractState objects from the list of StateAndRef
- getInput/getOutput/getCommand/getAttachment extract a component by index
- getAttachment extracts an attachment by ID
- inputsOfType/inRefsOfType/outputsOfType/outRefsOfType/commandsOfType extract components based on their generic type
- filterInputs/filterInRefs/filterOutputs/filterOutRefs/filterCommands extract components based on a predicate
- findInput/findInRef/findOutput/findOutRef/findCommand extract the single component that matches a predicate, or throw an exception if there are multiple matches
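A sketch of a few of these helpers inside a verify implementation (XState is an illustrative state type):

override fun verify(tx: LedgerTransaction) {
    // Extract components by generic type rather than by index.
    val inputs = tx.inputsOfType<XState>()
    val outputs = tx.outputsOfType<XState>()
    // Throws unless exactly one output matches the predicate.
    val output = tx.findOutput<XState> { it.participants.isNotEmpty() }
}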
requireThat¶
verify can be written to manually throw an exception for each constraint:
override fun verify(tx: LedgerTransaction) {
if (tx.inputs.size > 0)
throw IllegalArgumentException("No inputs should be consumed when issuing an X.")
if (tx.outputs.size != 1)
throw IllegalArgumentException("Only one output state should be created.")
}
public void verify(LedgerTransaction tx) {
if (tx.getInputs().size() > 0)
throw new IllegalArgumentException("No inputs should be consumed when issuing an X.");
if (tx.getOutputs().size() != 1)
throw new IllegalArgumentException("Only one output state should be created.");
}
However, this is verbose. To impose a series of constraints, we can use requireThat instead:
requireThat {
"No inputs should be consumed when issuing an X." using (tx.inputs.isEmpty())
"Only one output state should be created." using (tx.outputs.size == 1)
val out = tx.outputs.single() as XState
"The sender and the recipient cannot be the same entity." using (out.sender != out.recipient)
"All of the participants must be signers." using (command.signers.containsAll(out.participants))
"The X's value must be non-negative." using (out.x.value > 0)
}
requireThat(require -> {
require.using("No inputs should be consumed when issuing an X.", tx.getInputs().isEmpty());
require.using("Only one output state should be created.", tx.getOutputs().size() == 1);
final XState out = (XState) tx.getOutputs().get(0);
require.using("The sender and the recipient cannot be the same entity.", out.getSender() != out.getRecipient());
require.using("All of the participants must be signers.", command.getSigners().containsAll(out.getParticipants()));
require.using("The X's value must be non-negative.", out.getX().getValue() > 0);
return null;
});
For each <String, Boolean> pair within requireThat, if the boolean condition is false, an IllegalArgumentException is thrown with the corresponding string as the exception message. In turn, this exception will cause the transaction to be rejected.
Commands¶
LedgerTransaction contains the commands as a list of CommandWithParties instances. CommandWithParties pairs a CommandData with a list of required signers for the transaction:
/** A [Command] where the signing parties have been looked up if they have a well known/recognised institutional key. */
@KeepForDJVM
@CordaSerializable
data class CommandWithParties<out T : CommandData>(
val signers: List<PublicKey>,
/** If any public keys were recognised, the looked up institutions are available here */
@Deprecated("Should not be used in contract verification code as it is non-deterministic, will be disabled for some future target platform version onwards and will take effect only for CorDapps targeting those versions.")
val signingParties: List<Party>,
val value: T
)
Where:

- signers is the list of each signer's PublicKey
- signingParties is the list of the signers' identities, if known
- value is the object being signed (a command, in this case)
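A sketch of extracting a single command and its signers inside verify, using the requireSingleCommand helper from net.corda.core.contracts (Commands is an illustrative command interface):

val command = tx.commands.requireSingleCommand<Commands>()
// The keys that must sign for this command to be valid.
val requiredSigners: List<PublicKey> = command.signers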
Using commands to handle verify branching¶
Generally, we will want to impose different constraints on a transaction based on its commands. For example, we will want to impose different constraints on a cash issuance transaction than on a cash transfer transaction.
We can achieve this by extracting the command and using standard branching logic within verify. Here, we extract the single command of type XContract.Commands from the transaction, and branch verify accordingly:
class XContract : Contract {
interface Commands : CommandData {
class Issue : TypeOnlyCommandData(), Commands
class Transfer : TypeOnlyCommandData(), Commands
}
override fun verify(tx: LedgerTransaction) {
val command = tx.findCommand<Commands> { true }
when (command.value) {
is Commands.Issue -> {
// Issuance verification logic.
}
is Commands.Transfer -> {
// Transfer verification logic.
}
}
}
}
public class XContract implements Contract {
public interface Commands extends CommandData {
class Issue extends TypeOnlyCommandData implements Commands {}
class Transfer extends TypeOnlyCommandData implements Commands {}
}
@Override
public void verify(LedgerTransaction tx) {
final Commands command = tx.findCommand(Commands.class, cmd -> true).getValue();
if (command instanceof Commands.Issue) {
// Issuance verification logic.
} else if (command instanceof Commands.Transfer) {
// Transfer verification logic.
}
}
}
API: Contract Constraints¶
Note
Before reading this page, you should be familiar with the key concepts of Contracts.
Contract constraints solve two problems faced by any decentralised ledger that supports evolution of data and code:
- Controlling and agreeing upon upgrades
- Preventing attacks
Upgrades and security are intimately related because if an attacker can “upgrade” your data to a version of an app that gives them a back door, they would be able to do things like print money or edit states in any way they want. That’s why it’s important for participants of a state to agree on what kind of upgrades will be allowed.
Every state on the ledger contains the fully qualified class name of a Contract implementation, and also a constraint. This constraint specifies which versions of an application can be used to provide the named class, when the transaction is built. New versions released after a transaction is signed and finalised won't affect prior transactions because the old code is attached to it.
There are several types of constraint:

- Hash constraint: exactly one version of the app can be used with this state.
- Compatibility zone whitelisted (or CZ whitelisted) constraint: the compatibility zone operator lists the hashes of the versions that can be used with this contract class name.
- Signature constraint: any version of the app signed by the given CompositeKey can be used.
- Always accept constraint: any app can be used at all. This is insecure but convenient for testing.
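For illustration, here is a minimal Kotlin sketch of pinning an output state to a specific constraint while building a transaction; myState, notary, attachmentHash, ourIdentity and MyContract are hypothetical placeholders, and in normal use the TransactionBuilder selects a constraint automatically, as described below:

val builder = TransactionBuilder(notary)
        // Pin this output to exactly one app version via a hash constraint.
        .addOutputState(myState, MyContract.ID, HashAttachmentConstraint(attachmentHash))
        .addCommand(MyContract.Commands.Issue(), ourIdentity.owningKey)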
The actual app version used is defined by the attachments on a transaction that consumes a state: the JAR containing the state and contract classes, and optionally its dependencies, are all attached to the transaction. Other nodes will download these JARs from a node if they haven't seen them before, so they can be used for verification. The TransactionBuilder will manage the details of constraints for you, by selecting both constraints and attachments to ensure they line up correctly. Therefore you only need to have a basic understanding of this topic unless you are doing something sophisticated.
The best kind of constraint to use is the signature constraint. If you sign your application it will be used automatically. We recommend signature constraints because they let you smoothly migrate existing data to new versions of your application. Hash and zone whitelist constraints are left over from earlier Corda versions before signature constraints were implemented. They make it harder to upgrade applications than when using signature constraints, so they’re best avoided. Signature constraints can specify flexible threshold policies, but if you use the automatic support then a state will require the attached app to be signed by every key that the first attachment was signed by. Thus if the app that was used to issue the states was signed by Alice and Bob, every transaction must use an attachment signed by Alice and Bob.
Constraint propagation. Constraints are picked when a state is created for the first time in an issuance transaction. Once created, the constraint used by equivalent output states (i.e. output states that use the same contract class name) must match the input state, so it can’t be changed and you can’t combine states with incompatible constraints together in the same transaction.
Implicit vs explicit. Constraints are not the only way to manage upgrades to transactions. There are two ways of handling upgrades to a smart contract in Corda:
- Implicit: By pre-authorising multiple implementations of the contract ahead of time, using constraints.
- Explicit: By creating a special contract upgrade transaction and getting all participants of a state to sign it using the contract upgrade flows.
This article focuses on the first approach. To learn about the second, please see Upgrading CorDapps.
The advantage of pre-authorising upgrades using constraints is that you don’t need the heavyweight process of creating upgrade transactions for every state on the ledger. The disadvantage is that you place more faith in third parties, who could potentially change the app in ways you did not expect or agree with. The advantage of using the explicit upgrade approach is that you can upgrade states regardless of their constraint, including in cases where you didn’t anticipate a need to do so. But it requires everyone to sign, requires everyone to manually authorise the upgrade, consumes notary and ledger resources, and is just in general more complex.
Contract/State Agreement¶
Starting with Corda 4, a ContractState must explicitly indicate which Contract it belongs to. When a transaction is verified, the contract bundled with each state in the transaction must be its “owning” contract, otherwise we cannot guarantee that the transition of the ContractState will be verified against the business rules that should apply to it.
There are two mechanisms for indicating ownership. One is to annotate the ContractState with the BelongsToContract annotation, indicating the Contract class to which it is tied:
@BelongsToContract(MyContract.class)
public class MyState implements ContractState {
// implementation goes here
}
@BelongsToContract(MyContract::class)
data class MyState(val value: Int) : ContractState {
// implementation goes here
}
The other is to define the ContractState class as an inner class of the Contract class:
public class MyContract implements Contract {
public static class MyState implements ContractState {
// state implementation goes here
}
// contract implementation goes here
}
class MyContract : Contract {
data class MyState(val value: Int) : ContractState
}
If a ContractState’s owning Contract cannot be identified by either of these mechanisms, and the targetVersion of the CorDapp is 4 or greater, then transaction verification will fail with a TransactionRequiredContractUnspecifiedException. If the owning Contract can be identified, but the ContractState has been bundled with a different contract, then transaction verification will fail with a TransactionContractConflictException.
App versioning with signature constraints¶
Signed apps require a version number to be provided; see Versioning. You can't import two different JARs that claim to be the same version, provide the same contract classes and which are both signed. At runtime the node will throw a DuplicateContractClassException if this condition is violated.
Issues when using the HashAttachmentConstraint¶
When setting up a new network, it is possible to encounter errors when states are issued with the HashAttachmentConstraint, but not all nodes have that same version of the CorDapp installed locally.
In this case, flows will fail with a ContractConstraintRejection, and the failed flow will be sent to the flow hospital. From there it is suspended, waiting to be retried on node restart. This gives the node operator the opportunity to recover from those errors, which in the case of constraint violations means adding the right CorDapp JAR to the cordapps folder.
Hash constrained states in private networks¶
Where private networks started life using CorDapps with hash constrained states, we have introduced a mechanism to relax the checking of these hash constrained states when upgrading to signed CorDapps using signature constraints.
The Java system property -Dnet.corda.node.disableHashConstraints="true" may be set to relax the hash constraint checking behaviour.
This mode should only be used upon “out of band” agreement by all participants in a network.
Please also be aware that this flag should remain enabled until every hash constrained state is exited from the ledger.
CorDapps as attachments¶
CorDapp JARs (see What is a CorDapp?) that contain classes implementing the Contract interface are automatically loaded into the AttachmentStorage of a node, and made available as ContractAttachments.
They are retrievable by hash using AttachmentStorage.openAttachment. These JARs can either be installed on the node or will be automatically fetched over the network when receiving a transaction.
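For example, from within a flow (a minimal sketch, assuming attachmentHash is the known SecureHash of an installed CorDapp JAR):

// Returns null if no attachment with that hash is known to this node.
val contractJar: Attachment? = serviceHub.attachments.openAttachment(attachmentHash)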
Warning

The obvious way to write a CorDapp is to put all your states, contracts, flows and support code into a single Java module. This will work, but it will effectively publish your entire app onto the ledger. That has two problems: (1) it is inefficient, and (2) it means changes to your flows or other parts of the app will be seen by the ledger as a “new app”, which may end up requiring essentially unnecessary upgrade procedures. It's better to split your app into multiple modules: one which contains just states, contracts and core data types, and another which contains the rest of the app. See Modules.
Constraints propagation¶
As was mentioned above, the TransactionBuilder API gives the CorDapp developer, or even a malicious node owner, the possibility to construct output states with a constraint of their choosing.
For the ledger to remain in a consistent state, the expected behaviour is for output states to inherit the constraints of input states. This guarantees that, for example, a transaction can't output a state with the AlwaysAcceptAttachmentConstraint when the corresponding input state was the SignatureAttachmentConstraint. Translated, this means that if this rule is enforced, it ensures that the output state will be spent under similar conditions as it was created.
Before version 4, the constraint propagation logic was expected to be enforced in the contract verify code, as it has access to the entire Transaction.
Starting with version 4 of Corda, the constraint propagation logic has been implemented and enforced directly by the platform, unless disabled by putting @NoConstraintPropagation on the Contract class, which reverts to the previous behaviour of expecting apps to do this.
For contracts that are not annotated with @NoConstraintPropagation, the platform implements a fairly simple constraint transition policy to ensure security and also allow the possibility to transition to the new SignatureAttachmentConstraint.
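A minimal sketch of the opt-out; note that the contract's verify then becomes responsible for checking constraint transitions itself:

@NoConstraintPropagation
class MyLegacyContract : Contract {
    override fun verify(tx: LedgerTransaction) {
        // The platform no longer enforces constraint propagation for this contract,
        // so any required input-to-output constraint checks must be implemented here.
    }
}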
During transaction building, the AutomaticPlaceholderConstraint for output states will be resolved and the best contract attachment versions will be selected based on a variety of factors, so that the above holds true. If it can't find attachments in storage or there are no possible constraints, the TransactionBuilder will throw an exception.
Migrating constraints to Corda 4¶
Please read CorDapp constraints migration to understand how to consume and evolve pre-Corda 4 issued hash or CZ whitelisted constrained states using a Corda 4 signed CorDapp (using signature constraints).
Debugging¶
If an attachment constraint cannot be resolved, a MissingContractAttachments exception is thrown. There are three common sources of MissingContractAttachments exceptions:
Not setting CorDapp packages in tests¶
You are running a test and have not specified the CorDapp packages to scan. When using MockNetwork, ensure you have provided a package containing the contract class in MockNetworkParameters. See API: Testing.
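For example (a minimal sketch; the package names are hypothetical), the packages can be supplied when the mock network is created:

val network = MockNetwork(MockNetworkParameters(cordappsForAllNodes = listOf(
        TestCordapp.findCordapp("com.mycompany.myapp.contracts"),
        TestCordapp.findCordapp("com.mycompany.myapp.flows"))))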
Similarly, package names need to be provided when testing using the DriverDSL. DriverParameters has a property cordappsForAllNodes (Kotlin) or the method withCordappsForAllNodes in Java. Pass the collection of TestCordapp created by the utility method TestCordapp.findCordapp(String).
Example of the creation of two CorDapps with Finance App Flows and Finance App Contracts in Kotlin:
Driver.driver(DriverParameters(cordappsForAllNodes = listOf(
        TestCordapp.findCordapp("net.corda.finance.schemas"),
        TestCordapp.findCordapp("net.corda.finance.flows")))) {
    // Your test code goes here
}
The same example in Java:
Driver.driver(new DriverParameters()
        .withCordappsForAllNodes(Arrays.asList(
                TestCordapp.findCordapp("net.corda.finance.schemas"),
                TestCordapp.findCordapp("net.corda.finance.flows"))), dsl -> {
    // Your test code goes here
});
Starting a node missing CorDapp(s)¶
When running the Corda node, ensure all CorDapp JARs are placed in the cordapps directory of each node. By default, the Gradle Cordform task deployNodes copies all JARs if the CorDapps to deploy are specified. See Creating nodes locally for detailed instructions.
Wrong fully-qualified contract name¶
You are specifying the fully-qualified name of the contract incorrectly. For example, you've defined MyContract in the package com.mycompany.myapp.contracts, but the fully-qualified contract name you pass to the TransactionBuilder is com.mycompany.myapp.MyContract (instead of com.mycompany.myapp.contracts.MyContract).
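A common convention (shown here as a minimal Kotlin sketch, not a platform requirement) is to keep the fully-qualified name in a constant on the contract itself, so callers cannot drift out of sync with the package:

class MyContract : Contract {
    companion object {
        // Single source of truth for the fully-qualified contract name.
        const val ID = "com.mycompany.myapp.contracts.MyContract"
    }
    override fun verify(tx: LedgerTransaction) { /* ... */ }
}
// When building a transaction:
// txBuilder.addOutputState(myState, MyContract.ID)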
API: Vault Query¶
Overview¶
Corda has been architected from the ground up to encourage usage of industry standard, proven query frameworks and libraries for accessing RDBMS backed transactional stores (including the Vault).
Corda provides a number of flexible query mechanisms for accessing the Vault:
- Vault Query API
- Using a JDBC session (as described in Persistence)
- Custom JPA/JPQL queries
- Custom 3rd party Data Access frameworks such as Spring Data
The majority of query requirements can be satisfied by using the Vault Query API, which is exposed via the VaultService for use directly by flows:
/**
* Generic vault query function which takes a [QueryCriteria] object to define filters,
* optional [PageSpecification] and optional [Sort] modification criteria (default unsorted),
* and returns a [Vault.Page] object containing the following:
* 1. states as a List of <StateAndRef> (page number and size defined by [PageSpecification])
* 2. states metadata as a List of [Vault.StateMetadata] held in the Vault States table.
* 3. total number of results available if [PageSpecification] supplied (otherwise returns -1).
* 4. status types used in this query: [StateStatus.UNCONSUMED], [StateStatus.CONSUMED], [StateStatus.ALL].
* 5. other results (aggregate functions with/without using value groups).
*
* @throws VaultQueryException if the query cannot be executed for any reason
* (missing criteria or parsing error, paging errors, unsupported query, underlying database error).
*
* Notes
* If no [PageSpecification] is provided, a maximum of [DEFAULT_PAGE_SIZE] results will be returned.
* API users must specify a [PageSpecification] if they are expecting more than [DEFAULT_PAGE_SIZE] results,
* otherwise a [VaultQueryException] will be thrown alerting to this condition.
* It is the responsibility of the API user to request further pages and/or specify a more suitable [PageSpecification].
*/
@Throws(VaultQueryException::class)
fun <T : ContractState> _queryBy(criteria: QueryCriteria,
paging: PageSpecification,
sorting: Sort,
contractStateType: Class<out T>): Vault.Page<T>
/**
* Generic vault query function which takes a [QueryCriteria] object to define filters,
* optional [PageSpecification] and optional [Sort] modification criteria (default unsorted),
* and returns a [DataFeed] object containing:
* 1) a snapshot as a [Vault.Page] (described previously in [queryBy]).
* 2) an [Observable] of [Vault.Update].
*
* @throws VaultQueryException if the query cannot be executed for any reason.
*
* Notes:
* - The snapshot part of the query adheres to the same behaviour as the [queryBy] function.
* - The update part of the query currently only supports query criteria filtering by contract
* type(s) and state status(es). CID-731 <https://r3-cev.atlassian.net/browse/CID-731> proposes
* adding the complete set of [QueryCriteria] filtering.
*/
@Throws(VaultQueryException::class)
fun <T : ContractState> _trackBy(criteria: QueryCriteria,
paging: PageSpecification,
sorting: Sort,
contractStateType: Class<out T>): DataFeed<Vault.Page<T>, Vault.Update<T>>
And via CordaRPCOps for use by RPC client applications:
@RPCReturnsObservables
fun <T : ContractState> vaultQueryBy(criteria: QueryCriteria,
paging: PageSpecification,
sorting: Sort,
contractStateType: Class<out T>): Vault.Page<T>
@RPCReturnsObservables
fun <T : ContractState> vaultTrackBy(criteria: QueryCriteria,
paging: PageSpecification,
sorting: Sort,
contractStateType: Class<out T>): DataFeed<Vault.Page<T>, Vault.Update<T>>
Helper methods are also provided with default values for arguments:
fun <T : ContractState> vaultQuery(contractStateType: Class<out T>): Vault.Page<T>
fun <T : ContractState> vaultQueryByCriteria(criteria: QueryCriteria, contractStateType: Class<out T>): Vault.Page<T>
fun <T : ContractState> vaultQueryByWithPagingSpec(contractStateType: Class<out T>, criteria: QueryCriteria, paging: PageSpecification): Vault.Page<T>
fun <T : ContractState> vaultQueryByWithSorting(contractStateType: Class<out T>, criteria: QueryCriteria, sorting: Sort): Vault.Page<T>
fun <T : ContractState> vaultTrack(contractStateType: Class<out T>): DataFeed<Vault.Page<T>, Vault.Update<T>>
fun <T : ContractState> vaultTrackByCriteria(contractStateType: Class<out T>, criteria: QueryCriteria): DataFeed<Vault.Page<T>, Vault.Update<T>>
fun <T : ContractState> vaultTrackByWithPagingSpec(contractStateType: Class<out T>, criteria: QueryCriteria, paging: PageSpecification): DataFeed<Vault.Page<T>, Vault.Update<T>>
fun <T : ContractState> vaultTrackByWithSorting(contractStateType: Class<out T>, criteria: QueryCriteria, sorting: Sort): DataFeed<Vault.Page<T>, Vault.Update<T>>
The API provides both static (snapshot) and dynamic (snapshot with streaming updates) methods for a defined set of filter criteria:

- Use queryBy to obtain a current snapshot of data (for a given QueryCriteria)
- Use trackBy to obtain both a current snapshot and a future stream of updates (for a given QueryCriteria)
Note

Streaming updates are only filtered based on contract type and state status (UNCONSUMED, CONSUMED, ALL). They will not respect any other criteria that the initial query has been filtered by.
Simple pagination (page number and size) and sorting (directional ordering using standard or custom property attributes) is also specifiable. Defaults are defined for paging (pageNumber = 1, pageSize = 200) and sorting (direction = ASC).
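As a minimal sketch of making those defaults explicit (assuming a flow context; the sort attribute shown is just one of the standard options):

val paging = PageSpecification(pageNumber = 1, pageSize = 200) // the documented defaults
val sorting = Sort(setOf(Sort.SortColumn(
        SortAttribute.Standard(Sort.CommonStateAttribute.STATE_REF_TXN_ID), Sort.Direction.ASC)))
val page = vaultService.queryBy<ContractState>(VaultQueryCriteria(), paging, sorting)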
The QueryCriteria interface provides a flexible mechanism for specifying different filtering criteria, including and/or composition and a rich set of operators that include:
- Binary logical (AND, OR)
- Comparison (LESS_THAN, LESS_THAN_OR_EQUAL, GREATER_THAN, GREATER_THAN_OR_EQUAL)
- Equality (EQUAL, NOT_EQUAL)
- Likeness (LIKE, NOT_LIKE)
- Nullability (IS_NULL, NOT_NULL)
- Collection based (IN, NOT_IN)
- Standard SQL-92 aggregate functions (SUM, AVG, MIN, MAX, COUNT)
There are four implementations of this interface which can be chained together to define advanced filters.
- VaultQueryCriteria provides filterable criteria on attributes within the Vault states table: status (UNCONSUMED, CONSUMED), state reference(s), contract state type(s), notaries, soft locked states, timestamps (RECORDED, CONSUMED), state constraints (see Constraint Types), relevancy (ALL, RELEVANT, NON_RELEVANT).

Note

Sensible defaults are defined for frequently used attributes (status = UNCONSUMED, always include soft locked states).

- FungibleAssetQueryCriteria provides filterable criteria on attributes defined in the Corda Core FungibleAsset contract state interface, used to represent assets that are fungible, countable and issued by a specific party (e.g. Cash.State and CommodityContract.State in the Corda finance module). Filterable attributes include: participant(s), owner(s), quantity, issuer party(s) and issuer reference(s).

Note

All contract states that extend the FungibleAsset interface now automatically persist that interface's common state attributes to the vault_fungible_states table.

- LinearStateQueryCriteria provides filterable criteria on attributes defined in the Corda Core LinearState and DealState contract state interfaces, used to represent entities that continuously supersede themselves, all of which share the same linearId (e.g. trade entity states such as the IRSState defined in the SIMM valuation demo). Filterable attributes include: participant(s), linearId(s), uuid(s), and externalId(s).

Note

All contract states that extend LinearState or DealState now automatically persist those interfaces' common state attributes to the vault_linear_states table.

- VaultCustomQueryCriteria provides the means to specify one or many arbitrary expressions on attributes defined by a custom contract state that implements its own schema, as described in the Persistence documentation and associated examples. Custom criteria expressions are expressed using one of several type-safe CriteriaExpression: BinaryLogical, Not, ColumnPredicateExpression, AggregateFunctionExpression. The ColumnPredicateExpression allows for the specification of arbitrary criteria using the previously enumerated operator types. The AggregateFunctionExpression allows for the specification of an aggregate function type (sum, avg, max, min, count) with optional grouping and sorting. Furthermore, a rich DSL is provided to enable simple construction of custom criteria using any combination of ColumnPredicate. See the Builder object in QueryCriteriaUtils for a complete specification of the DSL.

Note

Custom contract schemas are automatically registered upon node startup for CorDapps. Please refer to Persistence for mechanisms of registering custom schemas for different testing purposes.
All QueryCriteria implementations are composable using the and and or operators.
All QueryCriteria implementations provide an explicitly specifiable set of common attributes:

- State status attribute (Vault.StateStatus), which defaults to filtering on UNCONSUMED states. When chaining several criteria using AND / OR, the last value of this attribute will override any previous one.
- Contract state types (Set<Class<out ContractState>>), which will contain at minimum one type (by default this will be ContractState, which resolves to all state types). When chaining several criteria using the and and or operators, all specified contract state types are combined into a single set.
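A minimal sketch of that overriding behaviour (the second status wins when the criteria are chained):

val consumedOnly = VaultQueryCriteria(status = Vault.StateStatus.ALL)
        .and(VaultQueryCriteria(status = Vault.StateStatus.CONSUMED)) // CONSUMED overrides ALL
val results = vaultService.queryBy<ContractState>(consumedOnly)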
An example of a custom query is illustrated here:
val generalCriteria = VaultQueryCriteria(Vault.StateStatus.ALL)
val results = builder {
val currencyIndex = PersistentCashState::currency.equal(USD.currencyCode)
val quantityIndex = PersistentCashState::pennies.greaterThanOrEqual(10L)
val customCriteria1 = VaultCustomQueryCriteria(currencyIndex)
val customCriteria2 = VaultCustomQueryCriteria(quantityIndex)
val criteria = generalCriteria.and(customCriteria1.and(customCriteria2))
vaultService.queryBy<Cash.State>(criteria)
}
Note

Custom contract states that implement the Queryable interface may now extend the common schema types FungiblePersistentState or LinearPersistentState. Previously, all custom contracts extended the root PersistentState class and defined repeated mappings of FungibleAsset and LinearState attributes. See SampleCashSchemaV2 and DummyLinearStateSchemaV2 as examples.
Examples of these QueryCriteria objects are presented below for Kotlin and Java.
Note

When specifying the ContractType as a parameterised type to the QueryCriteria in Kotlin, queries now include all concrete implementations of that type if it is an interface. Previously, it was only possible to query on concrete types (or the universe of all ContractState).
The Vault Query API leverages the rich semantics of the underlying JPA Hibernate based Persistence framework adopted by Corda.
Note

Permissioning at the database level will be enforced at a later date to ensure authenticated, role-based, read-only access to underlying Corda tables.
Note

APIs now provide ease-of-use calling semantics from both Java and Kotlin. However, it should be noted that Java custom queries are significantly more verbose due to the use of reflection fields to reference schema attribute types.
An example of a custom query in Java is illustrated here:
QueryCriteria generalCriteria = new VaultQueryCriteria(Vault.StateStatus.ALL);
FieldInfo attributeCurrency = getField("currency", CashSchemaV1.PersistentCashState.class);
FieldInfo attributeQuantity = getField("pennies", CashSchemaV1.PersistentCashState.class);
CriteriaExpression currencyIndex = Builder.equal(attributeCurrency, "USD");
CriteriaExpression quantityIndex = Builder.greaterThanOrEqual(attributeQuantity, 10L);
QueryCriteria customCriteria2 = new VaultCustomQueryCriteria(quantityIndex);
QueryCriteria customCriteria1 = new VaultCustomQueryCriteria(currencyIndex);
QueryCriteria criteria = generalCriteria.and(customCriteria1).and(customCriteria2);
Vault.Page<ContractState> results = vaultService.queryBy(Cash.State.class, criteria);
Note

Queries by Party specify the AbstractParty, which may be concrete or anonymous. In the latter case, where an anonymous party does not resolve to an X500 name via the IdentityService, no query results will ever be produced. For performance reasons, queries do not use PublicKey as search criteria.
Custom queries can be either case sensitive or case insensitive. This is defined via a Boolean as one of the function parameters of each operator function. By default, each operator is case sensitive.
An example of a case sensitive custom query operator is illustrated here:
val currencyIndex = PersistentCashState::currency.equal(USD.currencyCode, true)
Note

The Boolean input of true in this example could be removed, since the function defaults to true when the parameter is not provided.
An example of a case insensitive custom query operator is illustrated here:
val currencyIndex = PersistentCashState::currency.equal(USD.currencyCode, false)
An example of a case sensitive custom query operator in Java is illustrated here:
FieldInfo attributeCurrency = getField("currency", CashSchemaV1.PersistentCashState.class);
CriteriaExpression currencyIndex = Builder.equal(attributeCurrency, "USD", true);
An example of a case insensitive custom query operator in Java is illustrated here:
FieldInfo attributeCurrency = getField("currency", CashSchemaV1.PersistentCashState.class);
CriteriaExpression currencyIndex = Builder.equal(attributeCurrency, "USD", false);
Pagination¶
The API provides support for paging where large numbers of results are expected (by default, a page size is set to 200 results). Defining a sensible default page size enables efficient checkpointing within flows, and frees the developer from worrying about pagination where result sets are expected to be constrained to 200 or fewer entries. Where large result sets are expected (such as using the RPC API for reporting and/or UI display), it is strongly recommended to define a PageSpecification to correctly process results with efficient memory utilisation. A fail-fast mode is in place to alert API users to the need for pagination where a single query returns more than 200 results and no PageSpecification has been supplied.
Here’s a query that extracts every unconsumed ContractState from the vault in pages of size 200, starting from the default page number (page one):
val vaultSnapshot = proxy.vaultQueryBy<ContractState>(
QueryCriteria.VaultQueryCriteria(Vault.StateStatus.UNCONSUMED),
PageSpecification(DEFAULT_PAGE_NUM, 200))
Note

A page's maximum size MAX_PAGE_SIZE is defined as Int.MAX_VALUE and should be used with extreme caution, as the results returned may exceed your JVM's memory footprint.
Example usage¶
Kotlin¶
General snapshot queries using VaultQueryCriteria:
Query for all unconsumed states (simplest query possible):
val result = vaultService.queryBy<ContractState>()
/**
* Query result returns a [Vault.Page] which contains:
* 1) actual states as a list of [StateAndRef]
* 2) state reference and associated vault metadata as a list of [Vault.StateMetadata]
* 3) [PageSpecification] used to delimit the size of items returned in the result set (defaults to [DEFAULT_PAGE_SIZE])
* 4) Total number of items available (to aid further pagination if required)
*/
val states = result.states
val metadata = result.statesMetadata
Query for unconsumed states for some state references:
val sortAttribute = SortAttribute.Standard(Sort.CommonStateAttribute.STATE_REF_TXN_ID)
val criteria = VaultQueryCriteria(stateRefs = listOf(stateRefs.first(), stateRefs.last()))
val results = vaultService.queryBy<DummyLinearContract.State>(criteria, Sort(setOf(Sort.SortColumn(sortAttribute, Sort.Direction.ASC))))
Query for unconsumed states for several contract state types:
val criteria = VaultQueryCriteria(contractStateTypes = setOf(Cash.State::class.java, DealState::class.java))
val results = vaultService.queryBy<ContractState>(criteria)
Query for unconsumed states for specified contract state constraint types and sorted in ascending alphabetical order:
val constraintTypeCriteria = VaultQueryCriteria(constraintTypes = setOf(HASH, CZ_WHITELISTED))
val sortAttribute = SortAttribute.Standard(Sort.VaultStateAttribute.CONSTRAINT_TYPE)
val sorter = Sort(setOf(Sort.SortColumn(sortAttribute, Sort.Direction.ASC)))
val constraintResults = vaultService.queryBy<LinearState>(constraintTypeCriteria, sorter)
Query for unconsumed states for specified contract state constraints (type and data):
val constraintCriteria = VaultQueryCriteria(constraints = setOf(Vault.ConstraintInfo(constraintSignature),
Vault.ConstraintInfo(constraintSignatureCompositeKey), Vault.ConstraintInfo(constraintHash)))
val constraintResults = vaultService.queryBy<LinearState>(constraintCriteria)
Query for unconsumed states for a given notary:
val criteria = VaultQueryCriteria(notary = listOf(CASH_NOTARY))
val results = vaultService.queryBy<ContractState>(criteria)
Query for unconsumed states for a given set of participants:
val criteria = LinearStateQueryCriteria(participants = listOf(BIG_CORP, MINI_CORP))
val results = vaultService.queryBy<ContractState>(criteria)
Query for unconsumed states recorded between two time intervals:
val start = TODAY
val end = TODAY.plus(30, ChronoUnit.DAYS)
val recordedBetweenExpression = TimeCondition(
QueryCriteria.TimeInstantType.RECORDED,
ColumnPredicate.Between(start, end))
val criteria = VaultQueryCriteria(timeCondition = recordedBetweenExpression)
val results = vaultService.queryBy<ContractState>(criteria)
Note

This example illustrates usage of a Between ColumnPredicate.
Query for all states with pagination specification (10 results per page):
val pagingSpec = PageSpecification(DEFAULT_PAGE_NUM, 10)
val criteria = VaultQueryCriteria(status = Vault.StateStatus.ALL)
val results = vaultService.queryBy<ContractState>(criteria, paging = pagingSpec)
Note

The result set metadata field totalStatesAvailable allows you to further paginate accordingly, as demonstrated in the following example.
Query for all states using a pagination specification and iterate using the totalStatesAvailable field until no further pages available:
var pageNumber = DEFAULT_PAGE_NUM
val states = mutableListOf<StateAndRef<ContractState>>()
do {
val pageSpec = PageSpecification(pageNumber = pageNumber, pageSize = pageSize)
val results = vaultService.queryBy<ContractState>(VaultQueryCriteria(), pageSpec)
states.addAll(results.states)
pageNumber++
} while ((pageSpec.pageSize * (pageNumber - 1)) <= results.totalStatesAvailable)
Query for only relevant states in the vault:
val relevancyAllCriteria = VaultQueryCriteria(relevancyStatus = Vault.RelevancyStatus.RELEVANT)
val allDealStateCount = vaultService.queryBy<DummyDealContract.State>(relevancyAllCriteria).states
LinearState and DealState queries using LinearStateQueryCriteria:
Query for unconsumed linear states for given linear ids:
val linearIds = issuedStates.states.map { it.state.data.linearId }.toList()
val criteria = LinearStateQueryCriteria(linearId = listOf(linearIds.first(), linearIds.last()))
val results = vaultService.queryBy<LinearState>(criteria)
Query for all linear states associated with a linear id:
val linearStateCriteria = LinearStateQueryCriteria(linearId = listOf(linearId), status = Vault.StateStatus.ALL)
val vaultCriteria = VaultQueryCriteria(status = Vault.StateStatus.ALL)
val results = vaultService.queryBy<LinearState>(linearStateCriteria and vaultCriteria)
Query for unconsumed deal states with deal references:
val criteria = LinearStateQueryCriteria(externalId = listOf("456", "789"))
val results = vaultService.queryBy<DealState>(criteria)
Query for unconsumed deal states with deal parties:
val criteria = LinearStateQueryCriteria(participants = parties)
val results = vaultService.queryBy<DealState>(criteria)
Query for only relevant linear states in the vault:
val allLinearStateCriteria = LinearStateQueryCriteria(relevancyStatus = Vault.RelevancyStatus.RELEVANT)
val allLinearStates = vaultService.queryBy<DummyLinearContract.State>(allLinearStateCriteria).states
FungibleAsset and DealState queries using FungibleAssetQueryCriteria:
Query for fungible assets for a given currency:
val ccyIndex = builder { CashSchemaV1.PersistentCashState::currency.equal(USD.currencyCode) }
val criteria = VaultCustomQueryCriteria(ccyIndex)
val results = vaultService.queryBy<FungibleAsset<*>>(criteria)
Query for fungible assets for a minimum quantity:
val fungibleAssetCriteria = FungibleAssetQueryCriteria(quantity = builder { greaterThan(2500L) })
val results = vaultService.queryBy<Cash.State>(fungibleAssetCriteria)
Note

This example uses the builder DSL.
Query for fungible assets for a specific issuer party:
val criteria = FungibleAssetQueryCriteria(issuer = listOf(BOC))
val results = vaultService.queryBy<FungibleAsset<*>>(criteria)
Query for only relevant fungible states in the vault:
val allCashCriteria = FungibleStateQueryCriteria(relevancyStatus = Vault.RelevancyStatus.RELEVANT)
val allCashStates = vaultService.queryBy<Cash.State>(allCashCriteria).states
Aggregate Function queries using VaultCustomQueryCriteria:
Note

Query results for aggregate functions are contained in the otherResults attribute of a results Page.
Aggregations on cash using various functions:
val sum = builder { CashSchemaV1.PersistentCashState::pennies.sum() }
val sumCriteria = VaultCustomQueryCriteria(sum)
val count = builder { CashSchemaV1.PersistentCashState::pennies.count() }
val countCriteria = VaultCustomQueryCriteria(count)
val max = builder { CashSchemaV1.PersistentCashState::pennies.max() }
val maxCriteria = VaultCustomQueryCriteria(max)
val min = builder { CashSchemaV1.PersistentCashState::pennies.min() }
val minCriteria = VaultCustomQueryCriteria(min)
val avg = builder { CashSchemaV1.PersistentCashState::pennies.avg() }
val avgCriteria = VaultCustomQueryCriteria(avg)
val results = vaultService.queryBy<FungibleAsset<*>>(sumCriteria
.and(countCriteria)
.and(maxCriteria)
.and(minCriteria)
.and(avgCriteria))
Note

otherResults will contain 5 items, one per calculated aggregate function.
Aggregations on cash grouped by currency for various functions:
val sum = builder { CashSchemaV1.PersistentCashState::pennies.sum(groupByColumns = listOf(CashSchemaV1.PersistentCashState::currency)) }
val sumCriteria = VaultCustomQueryCriteria(sum)
val max = builder { CashSchemaV1.PersistentCashState::pennies.max(groupByColumns = listOf(CashSchemaV1.PersistentCashState::currency)) }
val maxCriteria = VaultCustomQueryCriteria(max)
val min = builder { CashSchemaV1.PersistentCashState::pennies.min(groupByColumns = listOf(CashSchemaV1.PersistentCashState::currency)) }
val minCriteria = VaultCustomQueryCriteria(min)
val avg = builder { CashSchemaV1.PersistentCashState::pennies.avg(groupByColumns = listOf(CashSchemaV1.PersistentCashState::currency)) }
val avgCriteria = VaultCustomQueryCriteria(avg)
val results = vaultService.queryBy<FungibleAsset<*>>(sumCriteria
.and(maxCriteria)
.and(minCriteria)
.and(avgCriteria))
Note

otherResults will contain 24 items, one result per calculated aggregate function per currency (the grouping attribute - currency in this case - is returned per aggregate result).
Sum aggregation on cash grouped by issuer party and currency and sorted by sum:
val sum = builder {
CashSchemaV1.PersistentCashState::pennies.sum(groupByColumns = listOf(CashSchemaV1.PersistentCashState::issuerPartyHash,
CashSchemaV1.PersistentCashState::currency),
orderBy = Sort.Direction.DESC)
}
val results = vaultService.queryBy<FungibleAsset<*>>(VaultCustomQueryCriteria(sum))
Note

otherResults will contain 12 items sorted from largest summed cash amount to smallest, one result per calculated aggregate function per issuer party and currency (grouping attributes are returned per aggregate result).
Dynamic queries (also using VaultQueryCriteria) are an extension to the snapshot queries, returning an additional QueryResults return type in the form of an Observable<Vault.Update>. Refer to ReactiveX Observable for a detailed understanding and usage of this type.
Track unconsumed cash states:
vaultService.trackBy<Cash.State>().updates // UNCONSUMED default
Track unconsumed linear states:
val (snapshot, updates) = vaultService.trackBy<LinearState>()
Note

This will return both DealState and LinearState states.
Track unconsumed deal states:
val (snapshot, updates) = vaultService.trackBy<DealState>()
Note

This will return only DealState states.
Java¶
Query for all unconsumed linear states:
Vault.Page<LinearState> results = vaultService.queryBy(LinearState.class);
Query for all consumed cash states:
VaultQueryCriteria criteria = new VaultQueryCriteria(Vault.StateStatus.CONSUMED);
Vault.Page<Cash.State> results = vaultService.queryBy(Cash.State.class, criteria);
Query for consumed deal states or linear ids, specify a paging specification and sort by unique identifier:
Vault.StateStatus status = Vault.StateStatus.CONSUMED;
@SuppressWarnings("unchecked")
Set<Class<LinearState>> contractStateTypes = new HashSet(singletonList(LinearState.class));
QueryCriteria vaultCriteria = new VaultQueryCriteria(status, contractStateTypes);
List<UniqueIdentifier> linearIds = singletonList(ids.getSecond());
QueryCriteria linearCriteriaAll = new LinearStateQueryCriteria(null, linearIds, Vault.StateStatus.UNCONSUMED, null);
QueryCriteria dealCriteriaAll = new LinearStateQueryCriteria(null, null, dealIds);
QueryCriteria compositeCriteria1 = dealCriteriaAll.or(linearCriteriaAll);
QueryCriteria compositeCriteria2 = compositeCriteria1.and(vaultCriteria);
PageSpecification pageSpec = new PageSpecification(DEFAULT_PAGE_NUM, MAX_PAGE_SIZE);
Sort.SortColumn sortByUid = new Sort.SortColumn(new SortAttribute.Standard(Sort.LinearStateAttribute.UUID), Sort.Direction.DESC);
Sort sorting = new Sort(ImmutableSet.of(sortByUid));
Vault.Page<LinearState> results = vaultService.queryBy(LinearState.class, compositeCriteria2, pageSpec, sorting);
Query for all states using a pagination specification and iterate using the totalStatesAvailable field until no further pages available:
int pageNumber = DEFAULT_PAGE_NUM;
List<StateAndRef<Cash.State>> states = new ArrayList<>();
long totalResults;
do {
PageSpecification pageSpec = new PageSpecification(pageNumber, pageSize);
Vault.Page<Cash.State> results = vaultService.queryBy(Cash.State.class, new VaultQueryCriteria(), pageSpec);
totalResults = results.getTotalStatesAvailable();
List<StateAndRef<Cash.State>> newStates = results.getStates();
System.out.println(newStates.size());
states.addAll(results.getStates());
pageNumber++;
} while ((pageSize * (pageNumber - 1) <= totalResults));
Aggregate Function queries using VaultCustomQueryCriteria:
Aggregations on cash using various functions:
FieldInfo pennies = getField("pennies", CashSchemaV1.PersistentCashState.class);
QueryCriteria sumCriteria = new VaultCustomQueryCriteria(sum(pennies));
QueryCriteria countCriteria = new VaultCustomQueryCriteria(Builder.count(pennies));
QueryCriteria maxCriteria = new VaultCustomQueryCriteria(Builder.max(pennies));
QueryCriteria minCriteria = new VaultCustomQueryCriteria(Builder.min(pennies));
QueryCriteria avgCriteria = new VaultCustomQueryCriteria(Builder.avg(pennies));
QueryCriteria criteria = sumCriteria.and(countCriteria).and(maxCriteria).and(minCriteria).and(avgCriteria);
Vault.Page<Cash.State> results = vaultService.queryBy(Cash.State.class, criteria);
Aggregations on cash grouped by currency for various functions:
FieldInfo pennies = getField("pennies", CashSchemaV1.PersistentCashState.class);
FieldInfo currency = getField("currency", CashSchemaV1.PersistentCashState.class);
QueryCriteria sumCriteria = new VaultCustomQueryCriteria(sum(pennies, singletonList(currency)));
QueryCriteria countCriteria = new VaultCustomQueryCriteria(Builder.count(pennies));
QueryCriteria maxCriteria = new VaultCustomQueryCriteria(Builder.max(pennies, singletonList(currency)));
QueryCriteria minCriteria = new VaultCustomQueryCriteria(Builder.min(pennies, singletonList(currency)));
QueryCriteria avgCriteria = new VaultCustomQueryCriteria(Builder.avg(pennies, singletonList(currency)));
QueryCriteria criteria = sumCriteria.and(countCriteria).and(maxCriteria).and(minCriteria).and(avgCriteria);
Vault.Page<Cash.State> results = vaultService.queryBy(Cash.State.class, criteria);
Sum aggregation on cash grouped by issuer party and currency and sorted by sum:
FieldInfo pennies = getField("pennies", CashSchemaV1.PersistentCashState.class);
FieldInfo currency = getField("currency", CashSchemaV1.PersistentCashState.class);
FieldInfo issuerPartyHash = getField("issuerPartyHash", CashSchemaV1.PersistentCashState.class);
QueryCriteria sumCriteria = new VaultCustomQueryCriteria(sum(pennies, asList(issuerPartyHash, currency), Sort.Direction.DESC));
Vault.Page<Cash.State> results = vaultService.queryBy(Cash.State.class, sumCriteria);
Track unconsumed cash states:
@SuppressWarnings("unchecked")
Set<Class<ContractState>> contractStateTypes = new HashSet(singletonList(Cash.State.class));
VaultQueryCriteria criteria = new VaultQueryCriteria(Vault.StateStatus.UNCONSUMED, contractStateTypes);
DataFeed<Vault.Page<ContractState>, Vault.Update<ContractState>> results = vaultService.trackBy(ContractState.class, criteria);
Vault.Page<ContractState> snapshot = results.getSnapshot();
Track unconsumed deal states or linear states (with snapshot including specification of paging and sorting by unique identifier):
@SuppressWarnings("unchecked")
Set<Class<ContractState>> contractStateTypes = new HashSet(asList(DealState.class, LinearState.class));
QueryCriteria vaultCriteria = new VaultQueryCriteria(Vault.StateStatus.UNCONSUMED, contractStateTypes);
List<UniqueIdentifier> linearIds = singletonList(uid);
List<AbstractParty> dealParty = singletonList(MEGA_CORP.getParty());
QueryCriteria dealCriteria = new LinearStateQueryCriteria(dealParty, null, dealIds);
QueryCriteria linearCriteria = new LinearStateQueryCriteria(dealParty, linearIds, Vault.StateStatus.UNCONSUMED, null);
QueryCriteria dealOrLinearIdCriteria = dealCriteria.or(linearCriteria);
QueryCriteria compositeCriteria = dealOrLinearIdCriteria.and(vaultCriteria);
PageSpecification pageSpec = new PageSpecification(DEFAULT_PAGE_NUM, MAX_PAGE_SIZE);
Sort.SortColumn sortByUid = new Sort.SortColumn(new SortAttribute.Standard(Sort.LinearStateAttribute.UUID), Sort.Direction.DESC);
Sort sorting = new Sort(ImmutableSet.of(sortByUid));
DataFeed<Vault.Page<ContractState>, Vault.Update<ContractState>> results = vaultService.trackBy(ContractState.class, compositeCriteria, pageSpec, sorting);
Vault.Page<ContractState> snapshot = results.getSnapshot();
Troubleshooting¶
If the results you were expecting do not match the actual returned query results, we recommend you add an entry to your log4j2.xml configuration file to enable the display of executed SQL statements:
<Logger name="org.hibernate.SQL" level="debug" additivity="false">
    <AppenderRef ref="Console-Appender"/>
</Logger>
Behavioural notes¶
- TrackBy updates do not take into account the full criteria specification, due to the different and more restrictive syntax in observables filtering (vs. the full SQL-92 JDBC filtering used in snapshot views). Specifically, dynamic updates are filtered by contractStateType and stateType (UNCONSUMED, CONSUMED, ALL) only.
- QueryBy and TrackBy snapshot views using pagination may return different result sets, as each paging request is a separate SQL query on the underlying database, and it is entirely conceivable that state modifications take place in between and/or in parallel to paging requests. When using pagination, always check the value of totalStatesAvailable (from the Vault.Page result) and adjust further paging requests appropriately.
Other use case scenarios¶
For advanced use cases that require sophisticated pagination, sorting, grouping, and aggregation functions, it is recommended that the CorDapp developer utilise one of the many proven frameworks that ship with this capability out of the box. Namely, implementations of JPQL (JPA Query Language) such as Hibernate for advanced SQL access, and Spring Data for advanced pagination and ordering constructs.
The Corda Tutorials provide examples satisfying these additional Use Cases:
- Example CorDapp service using Vault API Custom Query to access attributes of IOU State
- Example CorDapp service query extension executing Named Queries via JPQL
- Advanced pagination queries using Spring Data JPA
Corda 教程提供了满足这些情况的例子:
将所有者的密钥映射到外部 IDs¶
When creating new public keys via the KeyManagementService
, it is possible to create an association between the newly created public
key and an external ID. This, in effect, allows CorDapp developers to group state ownership/participation keys by an account ID.
当通过 KeyManagementService 创建新的公钥的时候,可以在新创建的公钥和一个外部 ID 之间建立关联。实际上,这允许 CorDapp 开发者按照账户 ID 对 state 的所有权/参与权(ownership/participation)密钥进行分组。
注解
This only works with freshly generated public keys and not the node’s legal identity key. If you require that the freshly
generated keys be for the node’s identity then use PersistentKeyManagementService.freshKeyAndCert
instead of freshKey
.
Currently, the generation of keys for other identities is not supported.
注解
这只适用于新生成的公钥,而 不是 节点的 legal identity key。如果你需要新生成的密钥被用作节点的 identity 的话,那么使用 PersistentKeyManagementService.freshKeyAndCert 而不是 freshKey。当前还不支持为其他的 identities 生成密钥。
The code snippet below shows how keys can be associated with an external ID by using the exposed JPA functionality:
下边的代码片段展示了如何通过使用暴露出来的 JPA 功能把一个密钥同一个外部 ID 关联在一起:
public AnonymousParty freshKeyForExternalId(UUID externalId, ServiceHub services) {
// Create a fresh key pair and return the public key.
AnonymousParty anonymousParty = freshKey();
// Associate the fresh key to an external ID.
services.withEntityManager(entityManager -> {
PersistentKeyManagementService.PublicKeyHashToExternalId mapping = new PersistentKeyManagementService.PublicKeyHashToExternalId(externalId, anonymousParty.getOwningKey());
entityManager.persist(mapping);
return null;
});
return anonymousParty;
}
fun freshKeyForExternalId(externalId: UUID, services: ServiceHub): AnonymousParty {
// Create a fresh key pair and return the public key.
val anonymousParty = freshKey()
// Associate the fresh key to an external ID.
services.withEntityManager {
val mapping = PersistentKeyManagementService.PublicKeyHashToExternalId(externalId, anonymousParty.owningKey)
persist(mapping)
}
return anonymousParty
}
As can be seen in the code snippet above, the PublicKeyHashToExternalId
entity has been added to PersistentKeyManagementService
,
which allows you to associate your public keys with external IDs. So far, so good.
像上边的代码中看到的那样,PublicKeyHashToExternalId
entity 被添加到了 PersistentKeyManagementService
,这就允许你把你的公钥同外部 ID 关联起来。到目前为止,一切顺利。
注解
Here, it is worth noting that we must map owning keys to external IDs, as opposed to state objects. This is because it
might be the case that a LinearState
is owned by two public keys generated by the same node.
注解
这里值得注意的是,我们必须将 owning keys(而不是 state 对象)映射到外部 ID。这是因为同一个 LinearState 可能会被同一节点生成的两个公钥共同拥有。
The intuition here is that when these public keys are used to own or participate in a state object, it is trivial to then associate those
states with a particular external ID. Behind the scenes, when states are persisted to the vault, the owning keys for each state are
persisted to a PersistentParty
table. The PersistentParty
table can be joined with the PublicKeyHashToExternalId
table to create
a view which maps each state to one or more external IDs. The entity relationship diagram below helps to explain how this works.
这里的直观理解是:当这些公钥被用来拥有或者参与一个 state 对象的时候,把这些 states 关联到一个特定的外部 ID 就变得非常简单了。在幕后,当 states 被持久化到 vault 的时候,每个 state 的 owning keys 都会被持久化到 PersistentParty 表。PersistentParty 表可以和 PublicKeyHashToExternalId 表进行 join,从而创建一个将每个 state 映射到一个或多个外部 ID 的视图。下边的实体关系(ER)图能够帮助解释这是如何工作的。

When performing a vault query, it is now possible to query for states by external ID using a custom query criteria.
当进行一次 vault 查询的时候,现在使用一个自定义的查询条件来通过外部 ID 查询 states 是可能的。
UUID id = someExternalId;
FieldInfo externalIdField = getField("externalId", VaultSchemaV1.StateToExternalId.class);
CriteriaExpression externalId = Builder.equal(externalIdField, id);
QueryCriteria query = new VaultCustomQueryCriteria(externalId);
Vault.Page<StateType> results = vaultService.queryBy(StateType.class, query);
val id: UUID = someExternalId
val externalId = builder { VaultSchemaV1.StateToExternalId::externalId.equal(id) }
val queryCriteria = QueryCriteria.VaultCustomQueryCriteria(externalId)
val results = vaultService.queryBy<StateType>(queryCriteria).states
The VaultCustomQueryCriteria
can also be combined with other query criteria, like custom schemas, for instance. See the vault query API
examples above for how to combine QueryCriteria
.
VaultCustomQueryCriteria
也能够跟其他的查询条件合并使用,比如像自定义 schemas。查看上边的 vault 查询 API 例子来了解如何合并 QueryCriteria
。
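For example, the external-ID criteria can be narrowed to unconsumed states by and-ing it with a VaultQueryCriteria. A minimal Kotlin sketch, where StateType stands for your state class, as in the snippets above:
val unconsumedCriteria = QueryCriteria.VaultQueryCriteria(Vault.StateStatus.UNCONSUMED)
val externalIdCriteria = builder { VaultSchemaV1.StateToExternalId::externalId.equal(id) }
// Combine both criteria so only UNCONSUMED states mapped to this external ID are returned.
val combinedCriteria = unconsumedCriteria.and(QueryCriteria.VaultCustomQueryCriteria(externalIdCriteria))
val results = vaultService.queryBy<StateType>(combinedCriteria).states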
API: Transactions¶
注解
Before reading this page, you should be familiar with the key concepts of Transactions.
注解
在阅读本页之前,你应该先熟悉核心概念 Transactions。
Transaction 生命周期¶
Between its creation and its final inclusion on the ledger, a transaction will generally occupy one of three states:
TransactionBuilder
. A transaction’s initial state. This is the only state during which the transaction is mutable, so we must add all the required components before moving on.SignedTransaction
. The transaction now has one or more digital signatures, making it immutable. This is the transaction type that is passed around to collect additional signatures and that is recorded on the ledger.LedgerTransaction
. The transaction has been “resolved” - for example, its inputs have been converted from references to actual states - allowing the transaction to be fully inspected.
从被创建到最终被写入账本,一个 transaction 通常会处于以下三种状态之一:
TransactionBuilder
。这个是 transaction 的初始状态。这也是 transaction 唯一可以被修改的一个状态,所以在进行下一步之前我们必须要确保添加了所有必须的组件。SignedTransaction
。现在的 transaction 已经有了一个或者更多的数字签名,并且已经是不可修改了。这个会是在不同的节点间传递来获得更多签名的 transaction 类型,也是会最终被记录到账本中的 transaction。LedgerTransaction
。这个 transaction 已经被“解决”掉了。比如它的 inputs 已经从引用被转换为实际的 states 了 - 允许 transaction 被彻底地检查。
We can visualise the transitions between the three stages as follows:
我们可以用下图来表示 transactions 在三个状态中的转换:

Transaction 组件¶
A transaction consists of six types of components:
- 1+ states:
- 0+ input states
- 0+ output states
- 0+ reference input states
- 1+ commands
- 0+ attachments
- 0 or 1 time-window
- A transaction with a time-window must also have a notary
一个 transaction 包括六种类型的组件:
- 1+ states:
- 0+ input states
- 0+ output states
- 0+ reference input states
- 1+ commands
- 0+ attachments
- 0 或者 1 time-window
- 带有 time-window 的 transaction 还必须要有一个 notary
Each component corresponds to a specific class in the Corda API. The following section describes each component class, and how it is created.
每个组件都对应于 Corda API 中的一个指定的类。下边的部分描述了每个组件的类,和他们是如何被创建的。
Input states¶
An input state is added to a transaction as a StateAndRef
, which combines:
- The
ContractState
itself - A
StateRef
identifying thisContractState
as the output of a specific transaction
Input states 是以 StateAndRef
实例的形式添加进 transaction 的,它包括:
ContractState
本身- 一个
StateRef
用来识别作为一个指定的 transaction 的 output 的该ContractState
val ourStateAndRef: StateAndRef<DummyState> = serviceHub.toStateAndRef<DummyState>(ourStateRef)
StateAndRef ourStateAndRef = getServiceHub().toStateAndRef(ourStateRef);
A StateRef
uniquely identifies an input state, allowing the notary to mark it as historic. It is made up of:
- The hash of the transaction that generated the state
- The state’s index in the outputs of that transaction
一个 StateRef
唯一地识别了一个 input state,允许 notary 可以将它标记为一个历史记录。它由下边的元素组成:
- 产生该 state 的 transaction 的哈希值
- 该 state 在这个 transaction 中的 outputs 列表中的索引值
val ourStateRef: StateRef = StateRef(SecureHash.sha256("DummyTransactionHash"), 0)
StateRef ourStateRef = new StateRef(SecureHash.sha256("DummyTransactionHash"), 0);
The StateRef
links an input state back to the transaction that created it. This means that transactions form
“chains” linking each input back to an original issuance transaction. This allows nodes verifying the transaction
to “walk the chain” and verify that each input was generated through a valid sequence of transactions.
StateRef
将一个 input state 链接回创建它的那个 transaction。这意味着 transactions 会形成一条条“链”,把每一个 input 一直链接回最初的发行(issuance)transaction。这使得验证 transaction 的节点能够“遍历整条链”,确认每个 input 都是通过一个有效的 transaction 序列产生的。
引用 input states¶
警告
Reference states are only available on Corda networks with a minimum platform version >= 4.
警告
引用类型的 states 只有在平台版本 >= 4 的 Corda 网络中才可用。
A reference input state is added to a transaction as a ReferencedStateAndRef
. A ReferencedStateAndRef
can be
obtained from a StateAndRef
by calling the StateAndRef.referenced()
method which returns a ReferencedStateAndRef
.
一个引用类型的 input state 是以 ReferencedStateAndRef 的形式被添加到一个交易中的。一个 ReferencedStateAndRef 可以通过调用 StateAndRef.referenced() 方法从一个 StateAndRef 获得,该方法会返回一个 ReferencedStateAndRef。
val referenceState: ReferencedStateAndRef<DummyState> = ourStateAndRef.referenced()
ReferencedStateAndRef referenceState = ourStateAndRef.referenced();
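Once obtained, the ReferencedStateAndRef is added to the transaction like any other component; the TransactionBuilder described later on this page exposes an addReferenceState method for this. A minimal Kotlin sketch, assuming a txBuilder is already in scope:
// Add the reference input state to an in-flight TransactionBuilder.
txBuilder.addReferenceState(referenceState)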
处理更新竞争(update races):
When using reference states in a transaction, it may be the case that a notarisation failure occurs. This is most likely because the creator of the state (being used as a reference state in your transaction), has just updated it.
当在交易中使用引用类型的 states 的时候,可能会发生公证失败的错误。这很有可能是因为 state 的创建者(在你的交易中被作为一个引用类型的 state 被使用),刚刚更新了它。
Typically, the creator of such reference data will have implemented flows for syndicating the updates out to users. However it is inevitable that there will be a delay between the state being used as a reference being consumed, and the nodes using it receiving the update.
通常,这类引用数据的创建者会实现把更新分发给使用者的 flows。然而,从作为引用被使用的那个 state 被消费掉,到使用它的节点收到更新之间,不可避免地会存在一段延迟。
This is where the WithReferencedStatesFlow comes in. Given a flow which uses reference states, the WithReferencedStatesFlow will execute the flow as a subFlow. If the flow fails due to a NotaryError.Conflict for a reference state, then it will be suspended until the state refs for the reference states are consumed. In this case, a consumption means that:
- the owner of the reference state has updated the state with a valid, notarised transaction
- the owner of the reference state has shared the update with the node attempting to run the flow which uses the reference state
- the node has successfully committed the transaction updating the reference state (and all the dependencies), and added the updated reference state to the vault
这就是 WithReferencedStatesFlow 的用武之地。给定一个使用引用类型 states 的 flow,WithReferencedStatesFlow 会以 subFlow 的方式执行这个 flow。如果这个 flow 因为某个引用 state 的 NotaryError.Conflict 而失败了,那么它就会被挂起,直到这些引用 states 对应的 state refs 被消费掉。在这种情况下,“被消费”意味着:
- 这个引用 state 的所有者已经使用一个有效的经过公证的交易更新了这个 state
- 这个引用 state 的所有者已经跟尝试运行使用这个引用 state 的 flow 的节点共享了更新
- 这个节点已经成功地提交了更新这个引用 state 的交易(以及所有的依赖),并且将这个更新过的引用 state 添加到 vault
At the point where the transaction updating the state being used as a reference is committed to storage and the vault
update occurs, then the WithReferencedStatesFlow
will wake up and re-execute the provided flow.
当更新作为一个引用的 state 的交易被提交并且 vault 的更新发生的时候,WithReferencedStatesFlow
会被唤醒并且会重新执行提供的 flow。
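In code, wrapping a flow this way might look like the following Kotlin sketch (UseRefStateFlow is a hypothetical flow that consumes the reference state):
// If UseRefStateFlow fails with NotaryError.Conflict on the reference state, the wrapper
// suspends until the updated reference state reaches the vault, then retries the flow.
val result = subFlow(WithReferencedStatesFlow { UseRefStateFlow(referenceState) })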
警告
Caution should be taken when using this flow as it facilitates automated re-running of flows which use reference states. The flow using reference states should include checks to ensure that the reference data is reasonable, especially if the economics of the transaction depends upon the data contained within a reference state.
警告
当使用这个 flow 的时候要特别小心,因为它会自动地重新运行使用引用 states 的 flow。使用引用 states 的 flow 应该包含检查来确保引用数据是合理的,尤其是当交易的经济利益依赖于引用 state 中所包含的数据的时候。
Output states¶
Since a transaction’s output states do not exist until the transaction is committed, they cannot be referenced as the
outputs of previous transactions. Instead, we create the desired output states as ContractState
instances, and
add them to the transaction directly:
因为一个 transaction 的 output states 在 transaction 被最终提交前是不存在的,所以他们不能够被之前的 transaction 进行引用。相反,我们通过创建 ContractState
实例的方式创建想要的 output states,并直接把他们添加到 transaction 中:
val ourOutputState: DummyState = DummyState()
DummyState ourOutputState = new DummyState();
In cases where an output state represents an update of an input state, we may want to create the output state by basing it on the input state:
当一个 output 会作为一个 input 的更新版本的时候,我们可能会希望基于原始的这个 input state 来创建一个新的 output state:
val ourOtherOutputState: DummyState = ourOutputState.copy(magicNumber = 77)
DummyState ourOtherOutputState = ourOutputState.copy(77);
Before our output state can be added to a transaction, we need to associate it with a contract. We can do this by
wrapping the output state in a StateAndContract
, which combines:
- The
ContractState
representing the output states - A
String
identifying the contract governing the state
当我们的 output state 在能够被添加到一个 transaction 之前,我们需要将它同一个 contract 关联起来。我们可以通过将这个 output state 放入一个 StateAndContract
中,它将下边两个元素整合在了一起:
ContractState
代表了 output state- 一个
String
用来识别决定该 state 的 contract
val ourOutput: StateAndContract = StateAndContract(ourOutputState, DummyContract.PROGRAM_ID)
StateAndContract ourOutput = new StateAndContract(ourOutputState, DummyContract.PROGRAM_ID);
Commands¶
A command is added to the transaction as a Command
, which combines:
- A
CommandData
instance indicating the command’s type - A
List<PublicKey>
representing the command’s required signers
一个 command 是作为 Command 实例被添加到 transaction 中的。Command 包含:
- 一个
CommandData
实例,它代表了 command 的类型 - 一个
List<PublicKey>
代表了 command 所要求的签名者的列表
val commandData: DummyContract.Commands.Create = DummyContract.Commands.Create()
val ourPubKey: PublicKey = serviceHub.myInfo.legalIdentitiesAndCerts.first().owningKey
val counterpartyPubKey: PublicKey = counterparty.owningKey
val requiredSigners: List<PublicKey> = listOf(ourPubKey, counterpartyPubKey)
val ourCommand: Command<DummyContract.Commands.Create> = Command(commandData, requiredSigners)
DummyContract.Commands.Create commandData = new DummyContract.Commands.Create();
PublicKey ourPubKey = getServiceHub().getMyInfo().getLegalIdentitiesAndCerts().get(0).getOwningKey();
PublicKey counterpartyPubKey = counterparty.getOwningKey();
List<PublicKey> requiredSigners = ImmutableList.of(ourPubKey, counterpartyPubKey);
Command<DummyContract.Commands.Create> ourCommand = new Command<>(commandData, requiredSigners);
Attachments¶
Attachments are identified by their hash:
附件(attachments)是通过它们的哈希值来识别的:
val ourAttachment: SecureHash = SecureHash.sha256("DummyAttachment")
SecureHash ourAttachment = SecureHash.sha256("DummyAttachment");
The attachment with the corresponding hash must have been uploaded ahead of time via the node’s RPC interface.
具有相应哈希值的附件必须提前通过节点的 RPC 接口上传。
Time-windows¶
Time windows represent the period during which the transaction must be notarised. They can have a start and an end time, or be open at either end:
Time windows 代表了 transaction 必须在其间被公证的时间区间。它可以有一个起始和终止时间,或者在任意一端开放:
val ourTimeWindow: TimeWindow = TimeWindow.between(Instant.MIN, Instant.MAX)
val ourAfter: TimeWindow = TimeWindow.fromOnly(Instant.MIN)
val ourBefore: TimeWindow = TimeWindow.untilOnly(Instant.MAX)
TimeWindow ourTimeWindow = TimeWindow.between(Instant.MIN, Instant.MAX);
TimeWindow ourAfter = TimeWindow.fromOnly(Instant.MIN);
TimeWindow ourBefore = TimeWindow.untilOnly(Instant.MAX);
We can also define a time window as an Instant
plus/minus a time tolerance (e.g. 30 seconds):
我们也可以定义一个包含一个 Instant 和正/负时间差的 time window(比如加/减 30 秒钟):
val ourTimeWindow2: TimeWindow = TimeWindow.withTolerance(serviceHub.clock.instant(), 30.seconds)
TimeWindow ourTimeWindow2 = TimeWindow.withTolerance(getServiceHub().getClock().instant(), Duration.ofSeconds(30));
Or as a start-time plus a duration:
或者包含一个起始时间加上一个时间段:
val ourTimeWindow3: TimeWindow = TimeWindow.fromStartAndDuration(serviceHub.clock.instant(), 30.seconds)
TimeWindow ourTimeWindow3 = TimeWindow.fromStartAndDuration(getServiceHub().getClock().instant(), Duration.ofSeconds(30));
TransactionBuilder¶
创建一个 builder¶
The first step when creating a transaction proposal is to instantiate a TransactionBuilder
.
创建一个 transaction proposal 的第一步是实例化一个 TransactionBuilder
。
If the transaction has input states or a time-window, we need to instantiate the builder with a reference to the notary that will notarise the inputs and verify the time-window:
如果 transaction 包含 input states 或者一个 time-window 的话,我们在实例化 builder 时需要提供一个 notary 的引用,这个 notary 将会对 inputs 进行公证并验证这个 time-window:
val txBuilder: TransactionBuilder = TransactionBuilder(specificNotary)
TransactionBuilder txBuilder = new TransactionBuilder(specificNotary);
We discuss the selection of a notary in API: Flows.
我们在 API: Flows 讨论了如何选择一个 notary。
If the transaction does not have any input states or a time-window, it does not require a notary, and can be instantiated without one:
如果一个 transaction 没有任何的 input states 或者 time-window 的话,那就不需要指定 notary 来实例化了:
val txBuilderNoNotary: TransactionBuilder = TransactionBuilder()
TransactionBuilder txBuilderNoNotary = new TransactionBuilder();
添加 items¶
The next step is to build up the transaction proposal by adding the desired components.
下一步就是通过添加期望的组件来构建 transaction。
We can add components to the builder using the TransactionBuilder.withItems
method:
我们可以使用 TransactionBuilder.withItems
方法来向 builder 中增加组件:
/** A more convenient way to add items to this transaction that calls the add* methods for you based on type */
fun withItems(vararg items: Any) = apply {
for (t in items) {
when (t) {
is StateAndRef<*> -> addInputState(t)
is ReferencedStateAndRef<*> -> addReferenceState(t)
is SecureHash -> addAttachment(t)
is TransactionState<*> -> addOutputState(t)
is StateAndContract -> addOutputState(t.state, t.contract)
is ContractState -> throw UnsupportedOperationException("Removed as of V1: please use a StateAndContract instead")
is Command<*> -> addCommand(t)
is CommandData -> throw IllegalArgumentException("You passed an instance of CommandData, but that lacks the pubkey. You need to wrap it in a Command object first.")
is TimeWindow -> setTimeWindow(t)
is PrivacySalt -> setPrivacySalt(t)
else -> throw IllegalArgumentException("Wrong argument type: ${t.javaClass}")
}
}
}
withItems takes a vararg of objects and adds them to the builder based on their type:
- StateAndRef objects are added as input states
- ReferencedStateAndRef objects are added as reference input states
- TransactionState and StateAndContract objects are added as output states
  - Both TransactionState and StateAndContract are wrappers around a ContractState output that link the output to a specific contract
- Command objects are added as commands
- SecureHash objects are added as attachments
- A TimeWindow object replaces the transaction’s existing TimeWindow, if any
withItems 接收一个由对象构成的 vararg,并根据它们的类型向 builder 中添加内容:
- StateAndRef 对象是作为 input states 被添加的
- ReferencedStateAndRef 对象是作为引用类型的 input states 被添加的
- TransactionState 和 StateAndContract 对象是作为 output states 被添加的
  - TransactionState 和 StateAndContract 都是对一个 ContractState output 的封装,它们把这个 output 和一个指定的 contract 链接到一起
- Command 对象是作为 commands 被添加的
- SecureHash 对象是作为附件被添加的
- 如果 transaction 中已经存在 TimeWindow 的话,这里的 TimeWindow 对象会替换掉已经存在的那个 TimeWindow
Passing in objects of any other type will cause an IllegalArgumentException
to be thrown.
传入任何其他类型的对象将会造成一个 IllegalArgumentException
被抛出。
Here’s an example usage of TransactionBuilder.withItems
:
下边是一个如何使用 TransactionBuilder.withItems
的实例代码:
txBuilder.withItems(
// Inputs, as ``StateAndRef``s that reference the outputs of previous transactions
ourStateAndRef,
// Outputs, as ``StateAndContract``s
ourOutput,
// Commands, as ``Command``s
ourCommand,
// Attachments, as ``SecureHash``es
ourAttachment,
// A time-window, as ``TimeWindow``
ourTimeWindow
)
txBuilder.withItems(
// Inputs, as ``StateAndRef``s that reference to the outputs of previous transactions
ourStateAndRef,
// Outputs, as ``StateAndContract``s
ourOutput,
// Commands, as ``Command``s
ourCommand,
// Attachments, as ``SecureHash``es
ourAttachment,
// A time-window, as ``TimeWindow``
ourTimeWindow
);
There are also individual methods for adding components.
这里也有独立的方法来添加不同的组件。
Here are the methods for adding inputs and attachments:
添加 inputs 和 附件的方法:
txBuilder.addInputState(ourStateAndRef)
txBuilder.addAttachment(ourAttachment)
txBuilder.addInputState(ourStateAndRef);
txBuilder.addAttachment(ourAttachment);
An output state can be added as a ContractState
, contract class name and notary:
一个 output state 可以作为 ContractState
,contract 类名和 notary 来添加:
txBuilder.addOutputState(ourOutputState, DummyContract.PROGRAM_ID, specificNotary)
txBuilder.addOutputState(ourOutputState, DummyContract.PROGRAM_ID, specificNotary);
We can also leave the notary field blank, in which case the transaction’s default notary is used:
我们也可以将 notary 字段留空,那么 transaction 的默认 notary 就会被使用了:
txBuilder.addOutputState(ourOutputState, DummyContract.PROGRAM_ID)
txBuilder.addOutputState(ourOutputState, DummyContract.PROGRAM_ID);
Or we can add the output state as a TransactionState
, which already specifies the output’s contract and notary:
或者我们可以将一个 output state 作为 TransactionState
来添加,它已经指定了 output 的 contract 和 notary:
val txState: TransactionState<DummyState> = TransactionState(ourOutputState, DummyContract.PROGRAM_ID, specificNotary)
TransactionState txState = new TransactionState(ourOutputState, DummyContract.PROGRAM_ID, specificNotary);
Commands can be added as a Command
:
Commands 可以作为 Command
被添加:
txBuilder.addCommand(ourCommand)
txBuilder.addCommand(ourCommand);
Or as CommandData
and a vararg PublicKey
:
或者作为 CommandData
和一个 vararg PublicKey
:
txBuilder.addCommand(commandData, ourPubKey, counterpartyPubKey)
txBuilder.addCommand(commandData, ourPubKey, counterpartyPubKey);
For the time-window, we can set a time-window directly:
对于 time-window,我们可以直接设定 time-window:
txBuilder.setTimeWindow(ourTimeWindow)
txBuilder.setTimeWindow(ourTimeWindow);
Or define the time-window as a time plus a duration (e.g. 45 seconds):
或者将 time-window 定义为一个时间加上一个时间差(比如 45 秒钟):
txBuilder.setTimeWindow(serviceHub.clock.instant(), 45.seconds)
txBuilder.setTimeWindow(getServiceHub().getClock().instant(), Duration.ofSeconds(45));
为 builder 签名¶
Once the builder is ready, we finalize it by signing it and converting it into a SignedTransaction
.
一旦 builder 准备好了,我们就可以通过签名的方式将它变为一个 SignedTransaction
。
We can either sign with our legal identity key:
我们可以使用我们的 legal identity key 来签名:
val onceSignedTx: SignedTransaction = serviceHub.signInitialTransaction(txBuilder)
SignedTransaction onceSignedTx = getServiceHub().signInitialTransaction(txBuilder);
Or we can also choose to use another one of our public keys:
或者也可以选择使用我们的另一个公钥(public key)来签名:
val otherIdentity: PartyAndCertificate = serviceHub.keyManagementService.freshKeyAndCert(ourIdentityAndCert, false)
val onceSignedTx2: SignedTransaction = serviceHub.signInitialTransaction(txBuilder, otherIdentity.owningKey)
PartyAndCertificate otherIdentity = getServiceHub().getKeyManagementService().freshKeyAndCert(getOurIdentityAndCert(), false);
SignedTransaction onceSignedTx2 = getServiceHub().signInitialTransaction(txBuilder, otherIdentity.getOwningKey());
Either way, the outcome of this process is to create an immutable SignedTransaction
with our signature over it.
无论哪种方式,这个流程的结果都是创建出一个带有我们签名的、不可修改的 SignedTransaction
。
SignedTransaction¶
A SignedTransaction
is a combination of:
- An immutable transaction
- A list of signatures over that transaction
一个 SignedTransaction
是下边内容的组合:
- 一个不可修改的 transaction
- 在这个 transaction 上的签名列表
@KeepForDJVM
@CordaSerializable
data class SignedTransaction(val txBits: SerializedBytes<CoreTransaction>,
override val sigs: List<TransactionSignature>
) : TransactionWithSignatures {
Before adding our signature to the transaction, we’ll want to verify both the transaction’s contents and the transaction’s signatures.
在添加我们的签名之前,我们既要验证 transaction 的内容,也要验证 transaction 上已有的签名。
确认 transaction 的内容¶
If a transaction has inputs, we need to retrieve all the states in the transaction’s dependency chain before we can
verify the transaction’s contents. This is because the transaction is only valid if its dependency chain is also valid.
We do this by requesting any states in the chain that our node doesn’t currently have in its local storage from the
proposer(s) of the transaction. This process is handled by a built-in flow called ReceiveTransactionFlow
.
See API: Flows for more details.
如果一个 transaction 含有 inputs 的话,在能够验证 transaction 的内容之前,我们需要取回这个 transaction 的依赖链中的所有 states。这是因为只有当它的依赖链也是有效的时候,这个 transaction 才是有效的。我们可以向 transaction 的提议方请求依赖链中任何当前节点本地存储中没有的 states。这个流程由一个内置的名为 ReceiveTransactionFlow 的 flow 来处理。查看 API: Flows 了解详细信息。
We can now verify the transaction’s contents to ensure that it satisfies the contracts of all the transaction’s input and output states:
我们现在就可以验证 transaction 的内容来确保它的 input 和 output states 中的 contract code 中定义的约束都能满足:
twiceSignedTx.verify(serviceHub)
twiceSignedTx.verify(getServiceHub());
Checking that the transaction meets the contract constraints is only part of verifying the transaction’s contents. We will usually also want to perform our own additional validation of the transaction contents before signing, to ensure that the transaction proposal represents an agreement we wish to enter into.
检查 transaction 满足合约约束(contract constraints)只是验证 transaction 内容的一部分。通常我们也会在提供签名前,希望进行我们自己指定的额外的验证,来确保 transaction proposal 是我们真正想加入的一个协议。
However, the SignedTransaction
holds its inputs as StateRef
instances, and its attachments as SecureHash
instances, which do not provide enough information to properly validate the transaction’s contents. We first need to
resolve the StateRef
and SecureHash
instances into actual ContractState
and Attachment
instances, which
we can then inspect.
但是,SignedTransaction 是以 StateRef 实例的形式持有它的 inputs,以 SecureHash 实例的形式持有它的附件,这并不能提供足够的信息来很好地验证 transaction 的内容。我们首先需要把 StateRef 和 SecureHash 实例解析(resolve)为实际的 ContractState 和 Attachment 实例,然后我们才能对它们进行检查。
We achieve this by using the ServiceHub
to convert the SignedTransaction
into a LedgerTransaction
:
我们通过使用 ServiceHub
来将 SignedTransaction
转换为一个 LedgerTransaction
:
val ledgerTx: LedgerTransaction = twiceSignedTx.toLedgerTransaction(serviceHub)
LedgerTransaction ledgerTx = twiceSignedTx.toLedgerTransaction(getServiceHub());
We can now perform our additional verification. Here’s a simple example:
我们现在就可以进行额外的验证了,下边是示例代码:
val outputState: DummyState = ledgerTx.outputsOfType<DummyState>().single()
if (outputState.magicNumber != 777) {
// ``FlowException`` is a special exception type. It will be
// propagated back to any counterparty flows waiting for a
// message from this flow, notifying them that the flow has
// failed.
throw FlowException("We expected a magic number of 777.")
}
DummyState outputState = ledgerTx.outputsOfType(DummyState.class).get(0);
if (outputState.getMagicNumber() != 777) {
// ``FlowException`` is a special exception type. It will be
// propagated back to any counterparty flows waiting for a
// message from this flow, notifying them that the flow has
// failed.
throw new FlowException("We expected a magic number of 777.");
}
确认 transaction 的签名¶
Aside from verifying that the transaction’s contents are valid, we also need to check that the signatures are valid. A valid signature over the hash of the transaction prevents tampering.
除了验证 transaction 的内容是有效的,我们也需要检查签名是有效的。对 transaction 哈希值的有效签名能够防止篡改。
We can verify that all the transaction’s required signatures are present and valid as follows:
我们可以像下边这样验证 transaction 所要求的所有签名都已存在并且有效:
fullySignedTx.verifyRequiredSignatures()
fullySignedTx.verifyRequiredSignatures();
However, we’ll often want to verify the transaction’s existing signatures before all of them have been collected. For
this we can use SignedTransaction.verifySignaturesExcept
, which takes a vararg
of the public keys for
which the signatures are allowed to be missing:
然而,在所有的签名被收集到之前,我们通常就会想要验证 transaction 上已有的签名。为此我们可以使用 SignedTransaction.verifySignaturesExcept,它接收一个公钥的 vararg 参数,这些公钥对应的签名允许缺失:
onceSignedTx.verifySignaturesExcept(counterpartyPubKey)
onceSignedTx.verifySignaturesExcept(counterpartyPubKey);
There is also an overload of SignedTransaction.verifySignaturesExcept
, which takes a Collection
of the
public keys for which the signatures are allowed to be missing:
SignedTransaction.verifySignaturesExcept 还有一个重载,它接收一个公钥的 Collection,这些公钥对应的签名允许缺失:
onceSignedTx.verifySignaturesExcept(listOf(counterpartyPubKey))
onceSignedTx.verifySignaturesExcept(singletonList(counterpartyPubKey));
If the transaction is missing any signatures without the corresponding public keys being passed in, a
SignaturesMissingException
is thrown.
如果 transaction 缺少某些签名,而对应的公钥又没有被传入的话,就会抛出一个 SignaturesMissingException。
We can also choose to simply verify the signatures that are present:
我们也可以选择只验证那些已经存在的签名:
twiceSignedTx.checkSignaturesAreValid()
twiceSignedTx.checkSignaturesAreValid();
Be very careful, however - this function neither guarantees that the signatures that are present are required, nor checks whether any signatures are missing.
但是要特别小心:这个方法既不保证已存在的签名就是所要求的签名,也不会检查是否缺少任何签名。
为 transaction 提供签名¶
Once we are satisfied with the contents and existing signatures over the transaction, we add our signature to the
SignedTransaction
to indicate that we approve the transaction.
一旦我们同意了 transaction 的内容以及 transaction 上已经存在的这些签名,我们就可以将自己的签名附加在这个 SignedTransaction
上来说明我们同意了这个 transaction。
We can sign using our legal identity key, as follows:
我们可以使用我们的 legal identity key 来签名:
val twiceSignedTx: SignedTransaction = serviceHub.addSignature(onceSignedTx)
SignedTransaction twiceSignedTx = getServiceHub().addSignature(onceSignedTx);
Or we can choose to sign using another one of our public keys:
或者可以使用我们的其他的公钥来签名:
val twiceSignedTx2: SignedTransaction = serviceHub.addSignature(onceSignedTx, otherIdentity2.owningKey)
SignedTransaction twiceSignedTx2 = getServiceHub().addSignature(onceSignedTx, otherIdentity2.getOwningKey());
We can also generate a signature over the transaction without adding it to the transaction directly.
我们也可以对 transaction 生成一个签名,而不直接把它添加到 transaction 中。
We can do this with our legal identity key:
我们可以使用我们的 legal identity key 来实现这个:
val sig: TransactionSignature = serviceHub.createSignature(onceSignedTx)
TransactionSignature sig = getServiceHub().createSignature(onceSignedTx);
Or using another one of our public keys:
或者使用我们的另外的公钥:
val sig2: TransactionSignature = serviceHub.createSignature(onceSignedTx, otherIdentity2.owningKey)
TransactionSignature sig2 = getServiceHub().createSignature(onceSignedTx, otherIdentity2.getOwningKey());
公证和记录¶
Notarising and recording a transaction is handled by a built-in flow called FinalityFlow
. See API: Flows for
more details.
公证和记录一个 transaction 是由一个内置的名为 FinalityFlow 的 flow 来处理的。查看 API: Flows 了解详细信息。
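In Corda 4 the initiating side passes FinalityFlow the sessions with the counterparties, and each counterparty records the transaction with ReceiveFinalityFlow. A minimal Kotlin sketch of both sides, assuming the sessions and the fully signed transaction are in scope:
// Initiator: notarise the fully signed transaction, record it and distribute it.
val counterpartySession = initiateFlow(counterparty)
val notarisedTx = subFlow(FinalityFlow(fullySignedTx, listOf(counterpartySession)))

// Responder (inside the @InitiatedBy flow): wait for and record the finalised transaction.
subFlow(ReceiveFinalityFlow(otherSideSession))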
API: Flows¶
注解
Before reading this page, you should be familiar with the key concepts of Flows.
注解
在阅读这里之前,你应该已经熟悉了核心概念 Flows。
一个 Flow 的例子¶
Before we discuss the API offered by the flow, let’s consider what a standard flow may look like.
在我们讨论 flow 提供的 API 之前,让我们来想一下一个标准的 flow 应该像什么样子。
Imagine a flow for agreeing a basic ledger update between Alice and Bob. This flow will have two sides:
- An
Initiator
side, that will initiate the request to update the ledger - A
Responder
side, that will respond to the request to update the ledger
我们可以想象一个 Alice 和 Bob 之间同意一个基本的账本更新的 flow。这个 flow 会包含两边:
初始者
的一边,会发起更新账本的请求反馈者
的一边,会对更新账本的请求进行反馈
初始者¶
In our flow, the Initiator flow class will be doing the majority of the work:
在我们的 flow 中, Initiator flow 类将会处理主要的工作:
Part 1 - Build the transaction
- Choose a notary for the transaction
- Create a transaction builder
- Extract any input states from the vault and add them to the builder
- Create any output states and add them to the builder
- Add any commands, attachments and time-window to the builder
Part 1 - 创建 transaction
- 为 transaction 选择一个 notary
- 创建一个 transaction builder
- 提取出所有需要的来自 vault 的 input states 并把他们加入到 builder
- 创建所有需要的 output states 并把他们加入到 builder
- 向 builder 里添加所有需要的 commands,attachment 和 time-window
Part 2 - Sign the transaction
- Sign the transaction builder
- Convert the builder to a signed transaction
Part 2 - 为 transaction 提供签名
- 为 transaction builder 提供签名
- 将这个 builder 转换成一个 signed transaction
Part 3 - Verify the transaction
- Verify the transaction by running its contracts
Part 3 - 确认 transaction
- 通过执行 transaction 的 contracts 来验证这个 transaction
Part 4 - Gather the counterparty’s signature
- Send the transaction to the counterparty
- Wait to receive back the counterparty’s signature
- Add the counterparty’s signature to the transaction
- Verify the transaction’s signatures
Part 4 - 收集 counterparty 的签名
- 将 transaction 发送给 counterparty
- 等待接收 counterparty 的签名
- 将 counterparty 的签名添加到 transaction
- 验证 transaction 的签名
Part 5 - Finalize the transaction
- Send the transaction to the notary
- Wait to receive back the notarised transaction
- Record the transaction locally
- Store any relevant states in the vault
- Send the transaction to the counterparty for recording
Part 5 - 结束 transaction
- 将 transaction 发送给 notary
- 等待接收 notarised transaction 的反馈
- 将 transaction 存储到本地
- 将所有相关的 states 存储到 vault
- 将 transaction 发送到 counterparty 去记录
We can visualize the work performed by initiator as follows:
我们可以用下边的 flow 图来表示这个工作流程:

反馈方¶
To respond to these actions, the responder takes the following steps:
为了对这些动作做出响应,responder 会执行以下步骤:
Part 1 - Sign the transaction
- Receive the transaction from the counterparty
- Verify the transaction’s existing signatures
- Verify the transaction by running its contracts
- Generate a signature over the transaction
- Send the signature back to the counterparty
Part 1 - 为 transaction 提供签名
- 从 counterparty 接收 transaction
- 验证 transaction 中已经存在的签名
- 通过执行 transaction 的 contracts 来验证 transaction
- 对该 transaction 生成自己的签名
- 将签名发送回给 counterparty
Part 2 - Record the transaction
- Receive the notarised transaction from the counterparty
- Record the transaction locally
- Store any relevant states in the vault
Part 2 - 记录 transaction
- 从 counterparty 那边接收 notarised transaction
- 将 transaction 记录到本地
- 将所有相关的 states 记录到 vault
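Put together, the two sides sketched above map onto a fairly standard pair of FlowLogic classes. The following condensed Kotlin sketch uses the built-in CollectSignaturesFlow/SignTransactionFlow and FinalityFlow/ReceiveFinalityFlow pairs and omits the state- and contract-specific details:
@InitiatingFlow
@StartableByRPC
class Initiator(private val counterparty: Party) : FlowLogic<SignedTransaction>() {
    @Suspendable
    override fun call(): SignedTransaction {
        // Part 1: build the proposal.
        val notary = serviceHub.networkMapCache.notaryIdentities.first()
        val builder = TransactionBuilder(notary)
        // ... add inputs, outputs, commands, attachments and a time-window here ...

        // Part 2: sign the builder, producing an immutable SignedTransaction.
        val partSignedTx = serviceHub.signInitialTransaction(builder)

        // Part 3: verify the contents (not yet all signatures, hence the flag).
        partSignedTx.verify(serviceHub, checkSufficientSignatures = false)

        // Part 4: gather the counterparty's signature.
        val session = initiateFlow(counterparty)
        val fullySignedTx = subFlow(CollectSignaturesFlow(partSignedTx, listOf(session)))

        // Part 5: notarise, record locally and distribute.
        return subFlow(FinalityFlow(fullySignedTx, listOf(session)))
    }
}

@InitiatedBy(Initiator::class)
class Responder(private val otherSideSession: FlowSession) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        // Part 1: verify and sign the proposed transaction.
        val signFlow = object : SignTransactionFlow(otherSideSession) {
            override fun checkTransaction(stx: SignedTransaction) {
                // Additional, app-specific checks on the proposal go here.
            }
        }
        val expectedTxId = subFlow(signFlow).id

        // Part 2: record the finalised transaction.
        subFlow(ReceiveFinalityFlow(otherSideSession, expectedTxId))
    }
}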
FlowLogic¶
In practice, a flow is implemented as one or more communicating FlowLogic
subclasses. The FlowLogic
subclass’s constructor can take any number of arguments of any type. The generic of FlowLogic
(e.g.
FlowLogic<SignedTransaction>
) indicates the flow’s return type.
在实践中,一个 flow 是作为一个或者多个相互通信的 FlowLogic 子类来实现的。FlowLogic 子类的构造函数可以接收任意数量、任意类型的参数。FlowLogic 的泛型参数(比如 FlowLogic<SignedTransaction>)表明了这个 flow 的返回类型。
class Initiator(val arg1: Boolean,
val arg2: Int,
val counterparty: Party): FlowLogic<SignedTransaction>() { }
class Responder(val otherParty: Party) : FlowLogic<Unit>() { }
public static class Initiator extends FlowLogic<SignedTransaction> {
private final boolean arg1;
private final int arg2;
private final Party counterparty;
public Initiator(boolean arg1, int arg2, Party counterparty) {
this.arg1 = arg1;
this.arg2 = arg2;
this.counterparty = counterparty;
}
}
public static class Responder extends FlowLogic<Void> { }
FlowLogic 注解¶
Any flow from which you want to initiate other flows must be annotated with the @InitiatingFlow
annotation.
Additionally, if you wish to start the flow via RPC, you must annotate it with the @StartableByRPC
annotation:
任何你想要从中发起其他 flows 的 flow,必须要使用 @InitiatingFlow 注解来进行标注。并且,如果你希望通过 RPC 来启动这个 flow 的话,你必须使用 @StartableByRPC 注解:
@InitiatingFlow
@StartableByRPC
class Initiator(): FlowLogic<Unit>() { }
@InitiatingFlow
@StartableByRPC
public static class Initiator extends FlowLogic<Unit> { }
Meanwhile, any flow that responds to a message from another flow must be annotated with the @InitiatedBy
annotation.
@InitiatedBy
takes the class of the flow it is responding to as its single parameter:
同时,任何响应来自另一个 flow 的消息的 flow,必须使用 @InitiatedBy 注解进行标注。@InitiatedBy 将它所响应的 flow 的类作为唯一的参数:
@InitiatedBy(Initiator::class)
class Responder(val otherSideSession: FlowSession) : FlowLogic<Unit>() { }
@InitiatedBy(Initiator.class)
public static class Responder extends FlowLogic<Void> { }
Additionally, any flow that is started by a SchedulableState
must be annotated with the @SchedulableFlow
annotation.
另外,任何由一个 SchedulableState 启动的 flow 必须使用 @SchedulableFlow 注解进行标注。
Call¶
Each FlowLogic
subclass must override FlowLogic.call()
, which describes the actions it will take as part of
the flow. For example, the actions of the initiator’s side of the flow would be defined in Initiator.call
, and the
actions of the responder’s side of the flow would be defined in Responder.call
.
每一个 FlowLogic 子类都必须重写 FlowLogic.call(),该方法描述了它作为这个 flow 的一部分将要执行的动作。比如,flow 发起方的动作定义在 Initiator.call 中,flow 响应方的动作定义在 Responder.call 中。
In order for nodes to be able to run multiple flows concurrently, and to allow flows to survive node upgrades and
restarts, flows need to be checkpointable and serializable to disk. This is achieved by marking FlowLogic.call()
,
as well as any function invoked from within FlowLogic.call()
, with an @Suspendable
annotation.
为了让节点能够同时运行多个 flows,并且让 flows 能够在节点升级和重启之后继续存活,flows 需要能够设置检查点(checkpoint)并且可以被序列化到磁盘。这是通过在 FlowLogic.call() 以及任何由 FlowLogic.call() 调用的方法上标注 @Suspendable 注解来实现的。
class Initiator(val counterparty: Party): FlowLogic<Unit>() {
@Suspendable
override fun call() { }
}
public static class Initiator extends FlowLogic<Void> {
private final Party counterparty;
public Initiator(Party counterparty) {
this.counterparty = counterparty;
}
@Suspendable
@Override
public Void call() throws FlowException { }
}
ServiceHub¶
Within FlowLogic.call
, the flow developer has access to the node’s ServiceHub
, which provides access to the
various services the node provides. We will use the ServiceHub
extensively in the examples that follow. You can
also see API: ServiceHub for information about the services the ServiceHub
offers.
在 FlowLogic.call 中,flow 的开发者可以访问节点的 ServiceHub,它提供了对节点所提供的各种服务的访问。在后边的例子中我们会大量使用 ServiceHub。你也可以查看 API: ServiceHub 来了解 ServiceHub 都提供了哪些服务。
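As a quick orientation, the following Kotlin sketch shows a few services commonly reached through the ServiceHub from inside call() (illustrative only; each service is covered in its own section):
val us = serviceHub.myInfo.legalIdentities.first()                           // our node's identity
val notary = serviceHub.networkMapCache.notaryIdentities.first()             // network map access
val unconsumed = serviceHub.vaultService.queryBy(ContractState::class.java)  // vault access
val now = serviceHub.clock.instant()                                         // the node's clock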
常规 flow 任务¶
There are a number of common tasks that you will need to perform within FlowLogic.call
in order to agree ledger
updates. This section details the API for common tasks.
为了就账本更新达成一致,你需要在 FlowLogic.call 中执行很多常规的任务。本部分详细介绍了这些常规任务的 API。
构建 transaction¶
The majority of the work performed during a flow will be to build, verify and sign a transaction. This is covered in API: Transactions.
在一个 flow 中主要要执行的工作就是构建、确认一个 transaction 并提供签名。这个可以查看 API: Transactions。
从 vault 中获得 states¶
When building a transaction, you’ll often need to extract the states you wish to consume from the vault. This is covered in API: Vault Query.
当构建一个 transaction 的时候,你经常需要从 vault 中提取出你希望消费掉的 states。这个可以查看 API: Vault Query。
获得其他节点的信息¶
We can retrieve information about other nodes on the network and the services they offer using
ServiceHub.networkMapCache
.
我们可以使用 ServiceHub.networkMapCache
来获得网络中其他节点的信息,包括提供哪些服务。
Notaries¶
Remember that a transaction generally needs a notary to:
- Prevent double-spends if the transaction has inputs
- Serve as a timestamping authority if the transaction has a time-window
一个 transaction 通常需要一个 notary 来:
- 如果 transaction 有 inputs 的话,防止双花(double-spend)
- 如果 transaction 有 time-window 的话,作为时间戳的权威机构(timestamping authority)
There are several ways to retrieve a notary from the network map:
有很多方法来从 network map 那里获得一个 notary:
val notaryName: CordaX500Name = CordaX500Name(
organisation = "Notary Service",
locality = "London",
country = "GB")
val specificNotary: Party = serviceHub.networkMapCache.getNotary(notaryName)!!
// Alternatively, we can pick an arbitrary notary from the notary
// list. However, it is always preferable to specify the notary
// explicitly, as the notary list might change when new notaries are
// introduced, or old ones decommissioned.
val firstNotary: Party = serviceHub.networkMapCache.notaryIdentities.first()
CordaX500Name notaryName = new CordaX500Name("Notary Service", "London", "GB");
Party specificNotary = getServiceHub().getNetworkMapCache().getNotary(notaryName);
// Alternatively, we can pick an arbitrary notary from the notary
// list. However, it is always preferable to specify the notary
// explicitly, as the notary list might change when new notaries are
// introduced, or old ones decommissioned.
Party firstNotary = getServiceHub().getNetworkMapCache().getNotaryIdentities().get(0);
指定 counterparties¶
We can also use the network map to retrieve a specific counterparty:
我们也可以使用 network map 来获取一个指定的 counterparty 的信息:
val counterpartyName: CordaX500Name = CordaX500Name(
organisation = "NodeA",
locality = "London",
country = "GB")
val namedCounterparty: Party = serviceHub.identityService.wellKnownPartyFromX500Name(counterpartyName) ?:
throw IllegalArgumentException("Couldn't find counterparty for NodeA in identity service")
val keyedCounterparty: Party = serviceHub.identityService.partyFromKey(dummyPubKey) ?:
throw IllegalArgumentException("Couldn't find counterparty with key: $dummyPubKey in identity service")
CordaX500Name counterPartyName = new CordaX500Name("NodeA", "London", "GB");
Party namedCounterparty = getServiceHub().getIdentityService().wellKnownPartyFromX500Name(counterPartyName);
Party keyedCounterparty = getServiceHub().getIdentityService().partyFromKey(dummyPubKey);
在 parties 之间进行沟通¶
In order to create a communication session between your initiator flow and the receiver flow you must call
initiateFlow(party: Party): FlowSession
为了在你的 initiator flow 和 receiver flow 之间创建一个沟通 session,你必须要调用 initiateFlow(party: Party): FlowSession
FlowSession instances in turn provide three functions:
- send(payload: Any): sends the payload object
- receive(receiveType: Class<R>): R: receives an object of type receiveType
- sendAndReceive(receiveType: Class<R>, payload: Any): R: sends the payload object and receives an object of type receiveType back
FlowSession 实例提供三个方法:
- send(payload: Any):发送 payload 对象
- receive(receiveType: Class<R>): R:接收 receiveType 类型的对象
- sendAndReceive(receiveType: Class<R>, payload: Any): R:发送 payload 对象并且接收一个 receiveType 类型的返回对象
In addition, FlowLogic provides functions that batch receives:
- receiveAllMap(sessions: Map<FlowSession, Class<out Any>>): Map<FlowSession, UntrustworthyData<Any>> receives from all FlowSession objects specified in the passed-in map. The received types may differ.
- receiveAll(receiveType: Class<R>, sessions: List<FlowSession>): List<UntrustworthyData<R>> receives from all FlowSession objects specified in the passed-in list. The received types must be the same.
另外,FlowLogic 也提供了批量接收的方法:
- receiveAllMap(sessions: Map<FlowSession, Class<out Any>>): Map<FlowSession, UntrustworthyData<Any>>:从传入的 map 中指定的所有 FlowSession 对象接收。所接收到的类型可能不同。
- receiveAll(receiveType: Class<R>, sessions: List<FlowSession>): List<UntrustworthyData<R>>:从传入的 list 中指定的所有 FlowSession 对象接收。所接收到的类型必须相同。
The batched functions are implemented more efficiently by the flow framework.
批量方法在 flow framework 中有更高效的实现。
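For example, collecting the same reply type from several counterparties in a single suspension might look like this Kotlin sketch (sessionA and sessionB are assumed to be open FlowSessions):
val sessions = listOf(sessionA, sessionB)
// One checkpoint/suspension for both sessions, instead of two sequential receives.
val replies: List<UntrustworthyData<Boolean>> = receiveAll(Boolean::class.java, sessions)
val allAgreed = replies.all { it.unwrap { reply -> reply } }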
InitiateFlow¶
initiateFlow
creates a communication session with the passed in Party
.
initiateFlow 会创建一个同传入的 Party 之间的通信 session。
val counterpartySession: FlowSession = initiateFlow(counterparty)
FlowSession counterpartySession = initiateFlow(counterparty);
Note that at the time of call to this function no actual communication is done, this is deferred to the first send/receive, at which point the counterparty will either:
- Ignore the message if they are not registered to respond to messages from this flow.
- Start the flow they have registered to respond to this flow.
注意,在调用这个方法的时候还不会发生真正的通信,通信会被推迟到第一次 send/receive 的时候。在那个时间点,counterparty 会:
- 如果他们没有注册对来自这个 flow 的消息进行响应的话,忽略这个消息
- 如果他们注册了对这个 flow 进行响应的话,启动他们所注册的响应 flow
Send¶
Once we have a FlowSession
object we can send arbitrary data to a counterparty:
一旦我们有了一个 FlowSession
对象的话,我们就可以向 counterparty 发送任何的数据了:
counterpartySession.send(Any())
counterpartySession.send(new Object());
The flow on the other side must eventually reach a corresponding receive
call to get this message.
在另一方的 flow 最终必须要调用一个对应的 receive
来获得这个消息。
Receive¶
We can also wait to receive arbitrary data of a specific type from a counterparty. Again, this implies a corresponding
send
call in the counterparty’s flow. A few scenarios:
- We never receive a message back. In the current design, the flow is paused until the node’s owner kills the flow.
- Instead of sending a message back, the counterparty throws a
FlowException
. This exception is propagated back to us, and we can use the error message to establish what happened. - We receive a message back, but it’s of the wrong type. In this case, a
FlowException
is thrown. - We receive back a message of the correct type. All is good.
我们也可以等待从 counterparty 那里接收某个指定类型的任意数据。同样,这意味着 counterparty 的 flow 中需要一个对应的 send 调用。以下是几种情况:
- 我们从来没有收到一个返回的消息。在当前的设计中,flow 会被暂停直到节点的 owner 结束了 flow
- counterparty 抛出了一个
FlowException
而不是返回一个消息。这个异常会传回给我们,我们可以通过这个异常来判断发生了什么错误 - 我们收到了返回的消息,但是是一个错误的类型。这个时候,一个
FlowException
异常会被抛出 - 我们收到了一个类型正确的消息,一切正常。
Upon calling receive
(or sendAndReceive
), the FlowLogic
is suspended until it receives a response.
当调用了 receive(或者 sendAndReceive)方法的时候,FlowLogic 会被挂起,直到它收到一个响应。
We receive the data wrapped in an UntrustworthyData
instance. This is a reminder that the data we receive may not
be what it appears to be! We must unwrap the UntrustworthyData
using a lambda:
我们收到的数据会被打包在一个 UntrustworthyData
实例中。这提醒我们:收到的数据可能并不像它看起来的那样!我们必须使用一个 lambda 来将 UntrustworthyData 拆包:
val packet1: UntrustworthyData<Int> = counterpartySession.receive<Int>()
val int: Int = packet1.unwrap { data ->
// Perform checking on the object received.
// TODO: Check the received object.
// Return the object.
data
}
UntrustworthyData<Integer> packet1 = counterpartySession.receive(Integer.class);
Integer integer = packet1.unwrap(data -> {
// Perform checking on the object received.
// TODO: Check the received object.
// Return the object.
return data;
});
We’re not limited to sending to and receiving from a single counterparty. A flow can send messages to as many parties as it likes, and each party can invoke a different response flow:
我们也不会限制只能给一个 counterparty 发消息或者只能从一个 counterparty 那里收到消息。一个 flow 可以给任意多的 parties 发送消息,并且每个 party 可以调用不同的 response flow:
val regulatorSession: FlowSession = initiateFlow(regulator)
regulatorSession.send(Any())
val packet3: UntrustworthyData<Any> = regulatorSession.receive<Any>()
FlowSession regulatorSession = initiateFlow(regulator);
regulatorSession.send(new Object());
UntrustworthyData<Object> packet3 = regulatorSession.receive(Object.class);
警告
If you initiate several flows from the same @InitiatingFlow
flow then on the receiving side you must be
prepared to be initiated by any of the corresponding initiateFlow()
calls! A good way of handling this ambiguity
is to send as a first message a “role” message to the initiated flow, indicating which part of the initiating flow
the rest of the counter-flow should conform to. For example send an enum, and on the other side start with a switch
statement.
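One way to implement the suggested role handshake, as a Kotlin sketch (the Role enum and the branch bodies are illustrative):
@CordaSerializable
enum class Role { BUYER, SELLER }

// Initiating flow: announce which leg of the protocol this session will follow.
val session = initiateFlow(counterparty)
session.send(Role.BUYER)

// Responding flow: branch on the announced role before any other messages.
when (otherSideSession.receive<Role>().unwrap { it }) {
    Role.BUYER -> { /* handle the buyer leg */ }
    Role.SELLER -> { /* handle the seller leg */ }
}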
SendAndReceive¶
We can also use a single call to send data to a counterparty and wait to receive data of a specific type back. The type of data sent doesn’t need to match the type of the data received back:
我们也可以使用一个调用来向 counterparty 发送数据并且等待一个指定类型的返回数据。发送的数据类型不需要必须和收到的返回数据类型一致:
val packet2: UntrustworthyData<Boolean> = counterpartySession.sendAndReceive<Boolean>("You can send and receive any class!")
val boolean: Boolean = packet2.unwrap { data ->
// Perform checking on the object received.
// TODO: Check the received object.
// Return the object.
data
}
UntrustworthyData<Boolean> packet2 = counterpartySession.sendAndReceive(Boolean.class, "You can send and receive any class!");
Boolean bool = packet2.unwrap(data -> {
// Perform checking on the object received.
// TODO: Check the received object.
// Return the object.
return data;
});
Counterparty response¶
Suppose we’re now on the Responder
side of the flow. We just received the following series of messages from the
Initiator
:
- They sent us an
Any
instance - They waited to receive an
Integer
instance back - They sent a
String
instance and waited to receive aBoolean
instance back
假设我们现在是在 flow 对应的 Responder
的节点。我们刚刚收到了来自于 Initiator
的下边的一系列消息:
- 他们发送给我们
Any
实例 - 他们正在等待收到一个
Integer
类型的返回实例 - 他们发送了一个
String
的实例并且在等待收到一个Boolean
类型的返回实例
Our side of the flow must mirror these calls. We could do this as follows:
我们这边的 flow 也必须要反映出这样的调用。我们可以:
val any: Any = counterpartySession.receive<Any>().unwrap { data -> data }
val string: String = counterpartySession.sendAndReceive<String>(99).unwrap { data -> data }
counterpartySession.send(true)
Object obj = counterpartySession.receive(Object.class).unwrap(data -> data);
String string = counterpartySession.sendAndReceive(String.class, 99).unwrap(data -> data);
counterpartySession.send(true);
为什么要 Session?¶
Before FlowSession
s were introduced the send/receive API looked a bit different. They were functions on
FlowLogic
and took the address Party
as argument. The platform internally maintained a mapping from Party
to
session, hiding sessions from the user completely.
在 FlowSession 被引入之前,send/receive API 看起来有点不同。它们是 FlowLogic 上的方法,并且将目标 Party 作为参数。平台在内部维护了一个从 Party 到 session 的映射,将 session 对用户完全隐藏起来。
Although this is a convenient API it introduces subtle issues where a message that was originally meant for a specific session may end up in another.
尽管这是一个很方便的 API,但它会引入一些微妙的问题:本来发给某个特定 session 的消息可能最终跑到另一个 session 里。
Consider the following contrived example using the old Party
based API:
下边是一个使用旧的、基于 Party 的 API 的人为构造的例子:
@InitiatingFlow
class LaunchSpaceshipFlow : FlowLogic<Unit>() {
@Suspendable
override fun call() {
val shouldLaunchSpaceship = receive<Boolean>(getPresident()).unwrap { it }
if (shouldLaunchSpaceship) {
launchSpaceship()
}
}
fun launchSpaceship() {
}
fun getPresident(): Party {
TODO()
}
}
@InitiatedBy(LaunchSpaceshipFlow::class)
@InitiatingFlow
class PresidentSpaceshipFlow(val launcher: Party) : FlowLogic<Unit>() {
@Suspendable
override fun call() {
val needCoffee = true
send(getSecretary(), needCoffee)
val shouldLaunchSpaceship = false
send(launcher, shouldLaunchSpaceship)
}
fun getSecretary(): Party {
TODO()
}
}
@InitiatedBy(PresidentSpaceshipFlow::class)
class SecretaryFlow(val president: Party) : FlowLogic<Unit>() {
@Suspendable
override fun call() {
// ignore
}
}
@InitiatingFlow
class LaunchSpaceshipFlow extends FlowLogic<Void> {
@Suspendable
@Override
public Void call() throws FlowException {
boolean shouldLaunchSpaceship = receive(Boolean.class, getPresident()).unwrap(s -> s);
if (shouldLaunchSpaceship) {
launchSpaceship();
}
return null;
}
public void launchSpaceship() {
}
public Party getPresident() {
throw new AbstractMethodError();
}
}
@InitiatedBy(LaunchSpaceshipFlow.class)
@InitiatingFlow
class PresidentSpaceshipFlow extends FlowLogic<Void> {
private final Party launcher;
public PresidentSpaceshipFlow(Party launcher) {
this.launcher = launcher;
}
@Suspendable
@Override
public Void call() {
boolean needCoffee = true;
send(getSecretary(), needCoffee);
boolean shouldLaunchSpaceship = false;
send(launcher, shouldLaunchSpaceship);
return null;
}
public Party getSecretary() {
throw new AbstractMethodError();
}
}
@InitiatedBy(PresidentSpaceshipFlow.class)
class SecretaryFlow extends FlowLogic<Void> {
private final Party president;
public SecretaryFlow(Party president) {
this.president = president;
}
@Suspendable
@Override
public Void call() {
// ignore
return null;
}
}
The intention of the flows is very clear: LaunchSpaceshipFlow asks the president whether a spaceship should be launched. It is expecting a boolean reply. The president in return first tells the secretary that they need coffee, which is also communicated with a boolean. Afterwards the president replies to the launcher that they don’t want to launch.
这些 flows 的意图很明确:LaunchSpaceshipFlow 询问总统是否应该发射一艘宇宙飞船。它期待一个 boolean 类型的回复。总统则首先告诉秘书他们需要咖啡,这也是通过一个 boolean 来沟通的。之后总统回复发射方,说他们不想发射。
However the above can go horribly wrong when the launcher
happens to be the same party getSecretary
returns. In
this case the boolean meant for the secretary will be received by the launcher!
然而,当 launcher 恰好和 getSecretary 返回的是同一个 party 的时候,上边的代码就会出大问题。在这种情况下,本来发给秘书的那个 boolean 会被 launcher 接收到!
This indicates that Party
is not a good identifier for the communication sequence, and indeed the Party
based
API may introduce ways for an attacker to fish for information and even trigger unintended control flow like in the
above case.
这就说明对于一个通信序列来说,Party 并不是一个好的标识符。并且事实上,基于 Party 的 API 可能会给攻击者引入套取信息的途径,甚至像上边的例子那样触发并非有意的控制流。
Hence we introduced FlowSession
, which identifies the communication sequence. With FlowSession
s the above set
of flows would look like this:
因此我们引入了 FlowSession 来标识一个通信序列。使用 FlowSession 之后,上边的那组 flows 看起来会是这样:
@InitiatingFlow
class LaunchSpaceshipFlowCorrect : FlowLogic<Unit>() {
@Suspendable
override fun call() {
val presidentSession = initiateFlow(getPresident())
val shouldLaunchSpaceship = presidentSession.receive<Boolean>().unwrap { it }
if (shouldLaunchSpaceship) {
launchSpaceship()
}
}
fun launchSpaceship() {
}
fun getPresident(): Party {
TODO()
}
}
@InitiatedBy(LaunchSpaceshipFlowCorrect::class)
@InitiatingFlow
class PresidentSpaceshipFlowCorrect(val launcherSession: FlowSession) : FlowLogic<Unit>() {
@Suspendable
override fun call() {
val needCoffee = true
val secretarySession = initiateFlow(getSecretary())
secretarySession.send(needCoffee)
val shouldLaunchSpaceship = false
launcherSession.send(shouldLaunchSpaceship)
}
fun getSecretary(): Party {
TODO()
}
}
@InitiatedBy(PresidentSpaceshipFlowCorrect::class)
class SecretaryFlowCorrect(val presidentSession: FlowSession) : FlowLogic<Unit>() {
@Suspendable
override fun call() {
// ignore
}
}
@InitiatingFlow
class LaunchSpaceshipFlowCorrect extends FlowLogic<Void> {
@Suspendable
@Override
public Void call() throws FlowException {
FlowSession presidentSession = initiateFlow(getPresident());
boolean shouldLaunchSpaceship = presidentSession.receive(Boolean.class).unwrap(s -> s);
if (shouldLaunchSpaceship) {
launchSpaceship();
}
return null;
}
public void launchSpaceship() {
}
public Party getPresident() {
throw new AbstractMethodError();
}
}
@InitiatedBy(LaunchSpaceshipFlowCorrect.class)
@InitiatingFlow
class PresidentSpaceshipFlowCorrect extends FlowLogic<Void> {
private final FlowSession launcherSession;
public PresidentSpaceshipFlowCorrect(FlowSession launcherSession) {
this.launcherSession = launcherSession;
}
@Suspendable
@Override
public Void call() {
boolean needCoffee = true;
FlowSession secretarySession = initiateFlow(getSecretary());
secretarySession.send(needCoffee);
boolean shouldLaunchSpaceship = false;
launcherSession.send(shouldLaunchSpaceship);
return null;
}
public Party getSecretary() {
throw new AbstractMethodError();
}
}
@InitiatedBy(PresidentSpaceshipFlowCorrect.class)
class SecretaryFlowCorrect extends FlowLogic<Void> {
private final FlowSession presidentSession;
public SecretaryFlowCorrect(FlowSession presidentSession) {
this.presidentSession = presidentSession;
}
@Suspendable
@Override
public Void call() {
// ignore
return null;
}
}
Note how the president is now explicit about which session it wants to send to.
注意现在总统是如何显式地指明他想要发送给哪一个 session 的。
从旧的基于 Party 的 API到新的 API 的转换¶
In the old API the first send
or receive
to a Party
was the one kicking off the counter-flow. This is now
explicit in the initiateFlow
function call. To port existing code:
在旧的 API 中,向一个 Party 发出的第一个 send 或者 receive 会启动对应的 counter-flow。现在这一步通过调用 initiateFlow 方法显式地完成。要移植现有的代码:
send(regulator, Any()) // Old API
// becomes
val session = initiateFlow(regulator)
session.send(Any())
send(regulator, new Object()); // Old API
// becomes
FlowSession session = initiateFlow(regulator);
session.send(new Object());
Subflows¶
Subflows are pieces of reusable flows that may be run by calling FlowLogic.subFlow
. There are two broad categories
of subflows, inlined and initiating ones. The main difference lies in the counter-flow’s starting method, initiating
ones initiate counter-flows automatically, while inlined ones expect some parent counter-flow to run the inlined
counterpart.
Subflows 是一些可能被重用的 flows 并可以通过调用 FlowLogic.subFlow
来运行。这里有两大类的 subflows,inlined 和 initiating 的。主要的不同在于 counter-flow 的开始方法,initiating subflows 会自动地开始一个 counter-flows,然而 inlined subflows 期望由一个父的 counter-flow 来运行 inlined counter-part。
Inlined subflows¶
Inlined subflows inherit their calling flow’s type when initiating a new session with a counterparty. For example, say we have flow A calling an inlined subflow B, which in turn initiates a session with a party. The FlowLogic type used to determine which counter-flow should be kicked off will be A, not B. Note that this means that the other side of this inlined flow must therefore be implemented explicitly in the kicked off flow as well. This may be done by calling a matching inlined counter-flow, or by implementing the other side explicitly in the kicked off parent flow.
Inlined subflows 在同一个 counterparty 发起新 session 的时候,会继承调用它们的 flow 的类型。比如,假设 flow A 调用了一个 inlined subflow B,而 B 接着同某个 party 发起了一个 session。此时用来决定应该启动哪一个 counter-flow 的 FlowLogic 类型是 A,而不是 B。注意,这意味着这个 inlined flow 的另一侧也必须在被启动的 flow 中被显式地实现。这可以通过调用一个匹配的 inlined counter-flow,或者在被启动的父 flow 中显式地实现另一侧来完成。
An example of such a flow is CollectSignaturesFlow. It has a counter-flow SignTransactionFlow that isn’t annotated with InitiatedBy. This is because both of these flows are inlined; the kick-off relationship will be defined by the parent flows calling CollectSignaturesFlow and SignTransactionFlow.
这样的 flow 的一个例子是 CollectSignaturesFlow。它有一个 counter-flow SignTransactionFlow,这个 flow 并没有 InitiatedBy 的注解。这是因为这两个 flows 都是 inlined 的;kick-off 的关系是由调用 CollectSignaturesFlow 和 SignTransactionFlow 的父 flows 来定义的。
In the code inlined subflows appear as regular FlowLogic instances, without either the @InitiatingFlow or @InitiatedBy annotation.
在代码中,inlined subflows 会表现为常规的 FlowLogic 实例,不带有 @InitiatingFlow 或者 @InitiatedBy 注解。
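As an illustrative sketch (the flow names here are hypothetical), an inlined request/response pair is just two plain FlowLogic classes that the parent flow on each side must invoke explicitly:

    // No @InitiatingFlow: the parent flow must supply an already-open session.
    class SendQuoteFlow(private val session: FlowSession, private val quote: Int) : FlowLogic<Unit>() {
        @Suspendable
        override fun call() = session.send(quote)
    }

    // The inlined counterpart, which the parent flow on the other side calls itself.
    class ReceiveQuoteFlow(private val session: FlowSession) : FlowLogic<Int>() {
        @Suspendable
        override fun call(): Int = session.receive<Int>().unwrap { it }
    }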
注解
Inlined flows aren’t versioned; they inherit their parent flow’s version.
注解
Inlined flows 并没有自己的版本,他们会继承他们父 flows 的版本。
Initiating subflows¶
Initiating subflows are ones annotated with the @InitiatingFlow annotation. When such a flow initiates a session its type will be used to determine which @InitiatedBy flow to kick off on the counterparty.
Initiating subflows 是那些带有 @InitiatingFlow 注解的 subflows。当这样的 flow 初始一个 session 的时候,它的类型会被用来确定哪一个带有 @InitiatedBy 注解的 flow 会在 counterparty 那里被开始。
An example is the @InitiatingFlow InitiatorFlow/@InitiatedBy ResponderFlow flow pair in the FlowCookbook.
一个例子就是 FlowCookbook 中的 @InitiatingFlow InitiatorFlow/@InitiatedBy ResponderFlow 这个 flow 对。
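A minimal sketch of such a pair (hypothetical names, error handling omitted) looks like this:

    @InitiatingFlow
    class PingFlow(private val counterparty: Party) : FlowLogic<Unit>() {
        @Suspendable
        override fun call() {
            val session = initiateFlow(counterparty)
            session.send("ping")
        }
    }

    // Kicked off automatically on the counterparty because of PingFlow's type.
    @InitiatedBy(PingFlow::class)
    class PongFlow(private val otherSession: FlowSession) : FlowLogic<Unit>() {
        @Suspendable
        override fun call() {
            otherSession.receive<String>().unwrap { it }
        }
    }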
注解
Initiating flows are versioned separately from their parents.
注解
Initiating flows 有自己的版本,跟它的父 flows 是分开的。
注解
The only exception to this rule is FinalityFlow
which is annotated with @InitiatingFlow
but is an inlined flow. This flow
was previously initiating and the annotation exists to maintain backwards compatibility with old code.
注解
这个规则的唯一例外是 FinalityFlow,它带有 @InitiatingFlow 注解,但其实是一个 inlined flow。这个 flow 之前曾经是 initiating 的,保留这个注解是为了维持与旧代码的向后兼容性。
核心 initiating subflows¶
Corda-provided initiating subflows are a little different to standard ones as they are versioned together with the
platform, and their initiated counter-flows are registered explicitly, so there is no need for the InitiatedBy
annotation.
Corda 提供的 initiating subflows 与标准的 subflows 稍有不同:它们是和平台一起进行版本管理的,并且它们初始的 counter-flows 是被显式注册的,所以不需要 InitiatedBy 注解。
Flows 类库¶
Corda installs the following initiating subflow pairs on each node by default:
- NotaryChangeFlow/NotaryChangeHandler, which should be used to change a state’s notary
- ContractUpgradeFlow.Initiate/ContractUpgradeHandler, which should be used to change a state’s contract
- SwapIdentitiesFlow/SwapIdentitiesHandler, which is used to exchange confidential identities with a counterparty
Corda 在每个节点上默认安装了以下几对 initiating subflow:
- NotaryChangeFlow/NotaryChangeHandler,用来变更一个 state 的 notary
- ContractUpgradeFlow.Initiate/ContractUpgradeHandler,用来变更一个 state 的 contract
- SwapIdentitiesFlow/SwapIdentitiesHandler,用来与一个 counterparty 交换 confidential identities
警告
SwapIdentitiesFlow
/SwapIdentitiesHandler
are only installed if the confidential-identities
module
is included. The confidential-identities
module is still not stabilised, so the
SwapIdentitiesFlow
/SwapIdentitiesHandler
API may change in future releases. See Corda API.
警告
SwapIdentitiesFlow
/SwapIdentitiesHandler
只会在包含了 confidential-identities
模块的时候会被安装。confidential-identities
模块现在还不是稳定版本,所以 SwapIdentitiesFlow
/SwapIdentitiesHandler
API 模块在之后的 release 中可能会有变更。查看 Corda API。
Corda also provides a number of built-in inlined subflows that should be used for handling common tasks. The most important are:
- FinalityFlow, which is used to notarise, record locally and then broadcast a signed transaction to its participants and any extra parties
- ReceiveFinalityFlow, which is used to receive these notarised transactions from the FinalityFlow sender and record them locally
- CollectSignaturesFlow, which should be used to collect a transaction’s required signatures
- SendTransactionFlow, which should be used to send a signed transaction if it needs to be resolved on the other side
- ReceiveTransactionFlow, which should be used to receive a signed transaction
Corda 也提供了很多内置的 inlined subflows,用来处理常见的任务。其中比较重要的有:
- FinalityFlow,用来公证(notarise)一个签过名的 transaction,将其记录到本地,然后广播给它的所有参与者以及任何额外的 parties
- ReceiveFinalityFlow,用来接收来自 FinalityFlow 发送方的这些被公证过的 transactions 并将其记录到本地
- CollectSignaturesFlow,用来搜集一个 transaction 所要求的签名
- SendTransactionFlow,用来发送一个签过名的 transaction,如果这个 transaction 需要在另一侧被 resolve 的话
- ReceiveTransactionFlow,用来接收一个签过名的 transaction
Let’s look at some of these flows in more detail.
我们来更详细地看一下其中的一些 flows。
FinalityFlow¶
FinalityFlow allows us to notarise the transaction and get it recorded in the vault of the participants of all the transaction’s states:
FinalityFlow 允许我们公证一个 transaction,并且让这个 transaction 被记录到它所有 states 的参与者的 vault 中:
val notarisedTx1: SignedTransaction = subFlow(FinalityFlow(fullySignedTx, listOf(counterpartySession), FINALISATION.childProgressTracker()))
SignedTransaction notarisedTx1 = subFlow(new FinalityFlow(fullySignedTx, singleton(counterpartySession), FINALISATION.childProgressTracker()));
We can also choose to send the transaction to additional parties who aren’t one of the state’s participants:
我们也可以将 transaction 发送给额外的 parties 即使他们不是 state 的参与者:
val partySessions: List<FlowSession> = listOf(counterpartySession, initiateFlow(regulator))
val notarisedTx2: SignedTransaction = subFlow(FinalityFlow(fullySignedTx, partySessions, FINALISATION.childProgressTracker()))
List<FlowSession> partySessions = Arrays.asList(counterpartySession, initiateFlow(regulator));
SignedTransaction notarisedTx2 = subFlow(new FinalityFlow(fullySignedTx, partySessions, FINALISATION.childProgressTracker()));
Only one party has to call FinalityFlow for a given transaction to be recorded by all participants. It must not be called by every participant. Instead, every other participant must call ReceiveFinalityFlow in their responder flow to receive the transaction:
对于一个给定的 transaction,只需要一方调用 FinalityFlow 就可以让所有的参与者记录它。这 不应该 由每个参与方分别去调用。相反,每个其他的参与方 必须 在他们的 responder flow 中调用 ReceiveFinalityFlow 来接收这个交易:
subFlow(ReceiveFinalityFlow(counterpartySession, expectedTxId = idOfTxWeSigned))
subFlow(new ReceiveFinalityFlow(counterpartySession, idOfTxWeSigned));
idOfTxWeSigned is an optional parameter used to confirm that we got the right transaction. It comes from using SignTransactionFlow, which is described below.
idOfTxWeSigned 是一个可选的参数,用来确认我们得到的是正确的交易。它来自于下边会介绍的 SignTransactionFlow 的使用。
错误处理行为
Once a transaction has been notarised and its input states consumed by the flow initiator (eg. sender), should the participant(s) receiving the transaction fail to verify it, or the receiving flow (the finality handler) fails due to some other error, we then have a scenario where not all parties have the correct up to date view of the ledger (a condition where eventual consistency between participants takes longer than is normally the case under Corda’s eventual consistency model). To recover from this scenario, the receiver’s finality handler will automatically be sent to the Flow Hospital where it’s suspended and retried from its last checkpoint upon node restart, or according to other conditional retry rules explained in flow hospital runtime behaviour. This gives the node operator the opportunity to recover from the error. Until the issue is resolved the node will continue to retry the flow on each startup. Upon successful completion by the receiver’s finality flow, the ledger will become fully consistent once again.
当一笔交易已经被公证并且它的 input states 已经被 flow 的发起方(比如 sender)消费了之后,如果接收这笔交易的参与方验证失败,或者接收的 flow(finality handler)由于某些其他的错误而失败了的话,那么就会出现不是所有的参与方都持有正确的、最新的账本视图的场景(在 Corda 的 最终一致性模型 下,这种情况下参与方之间达到最终一致性要比通常花费更长的时间)。为了能够从这个场景中恢复,接收方的 finality handler 会被自动地发送到 Flow Hospital,在那里它会被挂起,并且在节点重启时从它的最后一个 checkpoint 重试,或者按照 flow hospital runtime behaviour 中解释的其他有条件的重试规则来重试。这就给了节点的维护者从错误中恢复的机会。在问题被解决之前,节点会在每次启动的时候持续地重试这个 flow。一旦接收方的 finality flow 成功完成,账本就会再次变得完全一致。
警告
It’s possible to forcibly terminate the erroring finality handler using the killFlow
RPC but at the risk of an inconsistent view of the ledger.
警告
使用 killFlow RPC 来强制终止出错的 finality handler 是可以的,但要承担账本视图不一致的风险。
注解
A future release will allow retrying hospitalised flows without restarting the node, i.e. via RPC.
注解
之后的 release 会允许不需要重启节点就能够重试有问题的 flows,比如通过 RPC。
CollectSignaturesFlow/SignTransactionFlow¶
The list of parties who need to sign a transaction is dictated by the transaction’s commands. Once we’ve signed a
transaction ourselves, we can automatically gather the signatures of the other required signers using
CollectSignaturesFlow
:
都要由哪些 parties 来为 transaction 提供签名是在 transaction 的 commands 中定义的。一旦我们为 transaction 提供了自己的签名,我们可以使用 CollectSignaturesFlow
来搜集其他必须提供签名的 parties 的签名:
val fullySignedTx: SignedTransaction = subFlow(CollectSignaturesFlow(twiceSignedTx, setOf(counterpartySession, regulatorSession), SIGS_GATHERING.childProgressTracker()))
SignedTransaction fullySignedTx = subFlow(new CollectSignaturesFlow(twiceSignedTx, Arrays.asList(counterpartySession, regulatorSession), SIGS_GATHERING.childProgressTracker()));
Each required signer will need to respond by invoking its own SignTransactionFlow
subclass to check the
transaction (by implementing the checkTransaction
method) and provide their signature if they are satisfied:
每一个要求提供签名的 party 需要调用他们自己的 SignTransactionFlow
子类来检查 transaction(通过实现 checkTransaction
方法) 并且在满足要求后提供自己的签名:
val signTransactionFlow: SignTransactionFlow = object : SignTransactionFlow(counterpartySession) {
override fun checkTransaction(stx: SignedTransaction) = requireThat {
// Any additional checking we see fit...
val outputState = stx.tx.outputsOfType<DummyState>().single()
require(outputState.magicNumber == 777)
}
}
val idOfTxWeSigned = subFlow(signTransactionFlow).id
class SignTxFlow extends SignTransactionFlow {
private SignTxFlow(FlowSession otherSession, ProgressTracker progressTracker) {
super(otherSession, progressTracker);
}
@Override
protected void checkTransaction(SignedTransaction stx) {
requireThat(require -> {
// Any additional checking we see fit...
DummyState outputState = (DummyState) stx.getTx().getOutputs().get(0).getData();
checkArgument(outputState.getMagicNumber() == 777);
return null;
});
}
}
SecureHash idOfTxWeSigned = subFlow(new SignTxFlow(counterpartySession, SignTransactionFlow.tracker())).getId();
Types of things to check include:
- Ensuring that the transaction received is the expected type, i.e. has the expected type of inputs and outputs
- Checking that the properties of the outputs are expected, this is in the absence of integrating reference data sources to facilitate this
- Checking that the transaction is not incorrectly spending (perhaps maliciously) asset states, as potentially the transaction creator has access to some of signer’s state references
需要检查的事情包括:
- 确保接收到的 transaction 是期望的类型,比如具有期望类型的 inputs 和 outputs
- 在没有集成引用数据源来协助检查的情况下,检查 outputs 的属性是否符合期望
- 检查这个 transaction 没有错误地(可能是恶意地)花费 asset states,因为 transaction 的创建者有可能能够访问签名者的一些 state references
SendTransactionFlow/ReceiveTransactionFlow¶
Verifying a transaction received from a counterparty also requires verification of every transaction in its
dependency chain. This means the receiving party needs to be able to ask the sender all the details of the chain.
The sender will use SendTransactionFlow
for sending the transaction and then for processing all subsequent
transaction data vending requests as the receiver walks the dependency chain using ReceiveTransactionFlow
:
验证一个从 counterparty 接收到的 transaction 同样需要验证它的依赖链(dependency chain)上的每一个 transaction。这就意味着接收方需要能够向发送方询问这个链条的所有详细内容。发送方会使用 SendTransactionFlow 来发送这个 transaction,并且在接收方使用 ReceiveTransactionFlow 遍历依赖链的过程中,处理所有后续的 transaction 数据请求:
subFlow(SendTransactionFlow(counterpartySession, twiceSignedTx))
// Optional request verification to further restrict data access.
subFlow(object : SendTransactionFlow(counterpartySession, twiceSignedTx) {
override fun verifyDataRequest(dataRequest: FetchDataFlow.Request.Data) {
// Extra request verification.
}
})
subFlow(new SendTransactionFlow(counterpartySession, twiceSignedTx));
// Optional request verification to further restrict data access.
subFlow(new SendTransactionFlow(counterpartySession, twiceSignedTx) {
@Override
protected void verifyDataRequest(@NotNull FetchDataFlow.Request.Data dataRequest) {
// Extra request verification.
}
});
We can receive the transaction using ReceiveTransactionFlow
, which will automatically download all the
dependencies and verify the transaction:
我们可以使用 ReceiveTransactionFlow
来接收 transaction,这会自动地下载所有的依赖并且确认 transaction:
val verifiedTransaction = subFlow(ReceiveTransactionFlow(counterpartySession))
SignedTransaction verifiedTransaction = subFlow(new ReceiveTransactionFlow(counterpartySession));
We can also send and receive a StateAndRef
dependency chain and automatically resolve its dependencies:
我们也可以发送和接收一个 StateAndRef
依赖链并且自动解决了它的依赖:
subFlow(SendStateAndRefFlow(counterpartySession, dummyStates))
// On the receive side ...
val resolvedStateAndRef = subFlow(ReceiveStateAndRefFlow<DummyState>(counterpartySession))
subFlow(new SendStateAndRefFlow(counterpartySession, dummyStates));
// On the receive side ...
List<StateAndRef<DummyState>> resolvedStateAndRef = subFlow(new ReceiveStateAndRefFlow<>(counterpartySession));
为什么要用 inlined subflows?¶
Inlined subflows provide a way to share commonly used flow code while forcing users to create a parent flow. Take for
example CollectSignaturesFlow
. Say we made it an initiating flow that automatically kicks off
SignTransactionFlow
that signs the transaction. This would mean malicious nodes can just send any old transaction to
us using CollectSignaturesFlow
and we would automatically sign it!
Inlined subflows 提供了一种共享常用 flow 代码的方式,同时强制用户创建一个父 flow。以 CollectSignaturesFlow 为例。假设我们把它做成一个 initiating flow,自动开始一个为 transaction 提供签名的 SignTransactionFlow。这就意味着恶意的节点只需要使用 CollectSignaturesFlow 随便给我们发送任何一个 transaction,我们就会自动地为其签名!
By making this pair of flows inlined we provide control to the user over whether to sign the transaction or not by forcing them to nest it in their own parent flows.
通过将这一对 flows 做成 inlined 的,我们强制用户将其嵌套到他们自己的父 flows 中,从而把是否为这个 transaction 签名的控制权交给了用户。
In general if you’re writing a subflow the decision of whether you should make it initiating should depend on whether the counter-flow needs broader context to achieve its goal.
总体来说,如果你在写一个 subflow,是否应该将其做成 initiating 的,应该取决于 counter-flow 是否需要更广泛的上下文来达成它的目标。
FlowException¶
Suppose a node throws an exception while running a flow. Any counterparty flows waiting for a message from the node
(i.e. as part of a call to receive
or sendAndReceive
) will be notified that the flow has unexpectedly
ended and will themselves end. However, the exception thrown will not be propagated back to the counterparties.
假设一个节点在运行 flow 的时候抛出了一个异常。任何正在等待来自该节点的消息的 counterparty flows(比如作为调用 receive 或者 sendAndReceive 的一部分)会被通知这个 flow 已经意外地终止了,并且他们自己也会结束。然而,抛出的这个异常并不会被传播回这些 counterparties。
If you wish to notify any waiting counterparties of the cause of the exception, you can do so by throwing a
FlowException
:
如果你想告知任何等待的 counterparties 异常的原因的话,你可以通过抛出一个 FlowException
来实现:
/**
* Exception which can be thrown by a [FlowLogic] at any point in its logic to unexpectedly bring it to a permanent end.
* The exception will propagate to all counterparty flows and will be thrown on their end the next time they wait on a
* [FlowSession.receive] or [FlowSession.sendAndReceive]. Any flow which no longer needs to do a receive, or has already
* ended, will not receive the exception (if this is required then have them wait for a confirmation message).
*
* If the *rethrown* [FlowException] is uncaught in counterparty flows and propagation triggers then the exception is
* downgraded to an [UnexpectedFlowEndException]. This means only immediate counterparty flows will receive information
* about what the exception was.
*
* [FlowException] (or a subclass) can be a valid expected response from a flow, particularly ones which act as a service.
* It is recommended a [FlowLogic] document the [FlowException] types it can throw.
*
* @property originalErrorId the ID backing [getErrorId]. If null it will be set dynamically by the flow framework when
* the exception is handled. This ID is propagated to counterparty flows, even when the [FlowException] is
* downgraded to an [UnexpectedFlowEndException]. This is so the error conditions may be correlated later on.
*/
open class FlowException(message: String?, cause: Throwable?, var originalErrorId: Long? = null) :
CordaException(message, cause), IdentifiableException {
constructor(message: String?, cause: Throwable?) : this(message, cause, null)
constructor(message: String?) : this(message, null)
constructor(cause: Throwable?) : this(cause?.toString(), cause)
constructor() : this(null, null)
override fun getErrorId(): Long? = originalErrorId
}
The flow framework will automatically propagate the FlowException
back to the waiting counterparties.
Flow framework 会自动地将这个 FlowException
返回给等待的 counterparties。
There are many scenarios in which throwing a FlowException would be appropriate:
- A transaction doesn’t verify()
- A transaction’s signatures are invalid
- The transaction does not match the parameters of the deal as discussed
- You are reneging on a deal
以下的情况是适合抛出一个 FlowException 的:
- 一个 transaction 没有通过 verify()
- 一个 transaction 的签名是无效的
- Transaction 跟协商好的交易参数不匹配
- 你拒绝履行一个交易
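As a sketch (tradeMatchesDeal is a hypothetical helper), a responder flow can surface a business-level failure to its waiting counterparties like this:

    @Suspendable
    override fun call() {
        val proposedTx = otherSideSession.receive<SignedTransaction>().unwrap { it }
        if (!tradeMatchesDeal(proposedTx)) {
            // Propagated back to the waiting counterparties instead of an opaque session error.
            throw FlowException("Transaction does not match the agreed deal terms")
        }
    }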
ProgressTracker¶
We can give our flow a progress tracker. This allows us to see the flow’s progress visually in our node’s CRaSH shell.
我们可以给我们的 flow 一个进度跟踪器。这个使我们能够在我们节点的 CRaSH shell 中看到 flow 的进展。
To provide a progress tracker, we have to override FlowLogic.progressTracker
in our flow:
为了提供一个 progress tracker,我们需要在我们的 flow 中重写 FlowLogic.progressTracker
:
companion object {
object ID_OTHER_NODES : Step("Identifying other nodes on the network.")
object SENDING_AND_RECEIVING_DATA : Step("Sending data between parties.")
object EXTRACTING_VAULT_STATES : Step("Extracting states from the vault.")
object OTHER_TX_COMPONENTS : Step("Gathering a transaction's other components.")
object TX_BUILDING : Step("Building a transaction.")
object TX_SIGNING : Step("Signing a transaction.")
object TX_VERIFICATION : Step("Verifying a transaction.")
object SIGS_GATHERING : Step("Gathering a transaction's signatures.") {
// Wiring up a child progress tracker allows us to see the
// subflow's progress steps in our flow's progress tracker.
override fun childProgressTracker() = CollectSignaturesFlow.tracker()
}
object VERIFYING_SIGS : Step("Verifying a transaction's signatures.")
object FINALISATION : Step("Finalising a transaction.") {
override fun childProgressTracker() = FinalityFlow.tracker()
}
fun tracker() = ProgressTracker(
ID_OTHER_NODES,
SENDING_AND_RECEIVING_DATA,
EXTRACTING_VAULT_STATES,
OTHER_TX_COMPONENTS,
TX_BUILDING,
TX_SIGNING,
TX_VERIFICATION,
SIGS_GATHERING,
VERIFYING_SIGS,
FINALISATION
)
}
private static final Step ID_OTHER_NODES = new Step("Identifying other nodes on the network.");
private static final Step SENDING_AND_RECEIVING_DATA = new Step("Sending data between parties.");
private static final Step EXTRACTING_VAULT_STATES = new Step("Extracting states from the vault.");
private static final Step OTHER_TX_COMPONENTS = new Step("Gathering a transaction's other components.");
private static final Step TX_BUILDING = new Step("Building a transaction.");
private static final Step TX_SIGNING = new Step("Signing a transaction.");
private static final Step TX_VERIFICATION = new Step("Verifying a transaction.");
private static final Step SIGS_GATHERING = new Step("Gathering a transaction's signatures.") {
// Wiring up a child progress tracker allows us to see the
// subflow's progress steps in our flow's progress tracker.
@Override
public ProgressTracker childProgressTracker() {
return CollectSignaturesFlow.tracker();
}
};
private static final Step VERIFYING_SIGS = new Step("Verifying a transaction's signatures.");
private static final Step FINALISATION = new Step("Finalising a transaction.") {
@Override
public ProgressTracker childProgressTracker() {
return FinalityFlow.tracker();
}
};
private final ProgressTracker progressTracker = new ProgressTracker(
        ID_OTHER_NODES,
        SENDING_AND_RECEIVING_DATA,
        EXTRACTING_VAULT_STATES,
        OTHER_TX_COMPONENTS,
        TX_BUILDING,
        TX_SIGNING,
        TX_VERIFICATION,
        SIGS_GATHERING,
        VERIFYING_SIGS,
        FINALISATION
);
We then update the progress tracker’s current step as we progress through the flow as follows:
然后我们就可以按照下边的方式来根据 flow 的进展来更新 progress tracker 的当前步骤:
progressTracker.currentStep = ID_OTHER_NODES
progressTracker.setCurrentStep(ID_OTHER_NODES);
HTTP 和数据库调用¶
HTTP, database and other calls to external resources are allowed in flows. However, their support is currently limited:
- The call must be executed in a BLOCKING way. Flows don’t currently support suspending to await the response to a call to an external resource
- For this reason, the call should be provided with a timeout to prevent the flow from suspending forever. If the timeout elapses, this should be treated as a soft failure and handled by the flow’s business logic
- The call must be idempotent. If the flow fails and has to restart from a checkpoint, the call will also be replayed
HTTP、数据库和其他对外部资源的调用在 flows 中是允许的。然而,目前对它们的支持是有限的:
- 这个调用必须以一种 阻塞 的方式来执行。Flows 当前还不支持挂起以等待对外部资源调用的响应
- 因此,这个调用应该设置一个超时,以避免 flow 被永远挂起。如果超时了,这应该被当作一个 soft failure,并由 flow 的业务逻辑来处理
- 这个调用必须是幂等的。如果这个 flow 失败了并且不得不从某个 checkpoint 重启的话,这次调用也会被重新执行
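The following is a minimal sketch of such a call (the endpoint URL and timeout values are illustrative): a blocking, timeout-bounded HTTP request inside a flow, with a timeout treated as a soft failure:

    import java.net.HttpURLConnection
    import java.net.SocketTimeoutException
    import java.net.URL

    @Suspendable
    override fun call() {
        val conn = URL("https://example.com/rates").openConnection() as HttpURLConnection // illustrative endpoint
        conn.connectTimeout = 5_000 // bound the call so the flow cannot hang forever
        conn.readTimeout = 5_000
        val body: String? = try {
            conn.inputStream.bufferedReader().readText() // executed in a BLOCKING way
        } catch (e: SocketTimeoutException) {
            null // soft failure: leave the decision to the flow's business logic
        } finally {
            conn.disconnect()
        }
        // Use the response, remembering the call may be replayed if the flow restarts
        // from a checkpoint, so the interaction must be idempotent.
    }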
并发,锁和等待¶
Corda is designed to:
- run many flows in parallel
- persist flows to storage and resurrect those flows much later
- (in the future) migrate flows between JVMs
Corda 被设计为能够:
- 并行运行许多 flows
- 将 flows 持久化到存储中,并在很久之后恢复这些 flows
- (在将来)在 JVM 之间迁移 flows
Because of this, care must be taken when performing locking or waiting operations.
因此,在执行锁或者等待的操作的时候必须要小心。
锁¶
Flows should avoid using locks or interacting with objects that are shared between flows (except for ServiceHub
and other
carefully crafted services such as Oracles. See Writing oracle services). Locks will significantly reduce the scalability of the
node, and can cause the node to deadlock if they remain locked across flow context switch boundaries (such as when sending
and receiving from peers, as discussed above, or sleeping, as discussed below).
Flows 应该避免使用锁,或者与在 flows 之间共享的对象进行交互(除了 ServiceHub 和其他精心设计过的服务,比如 Oracles,查看 Writing oracle services)。锁会很大程度上降低节点的可扩展性,并且如果在跨越 flow 上下文切换的边界时(比如像上边讨论的那样向 peers 发送和接收消息,或者像下边讨论的那样休眠)依旧保持锁定状态的话,还会造成节点的死锁。
等待¶
A flow can wait until a specific transaction has been received and verified by the node using FlowLogic.waitForLedgerCommit.
Outside of this, scheduling an activity to occur at some future time should be achieved using SchedulableState
.
一个 flow 可以使用 FlowLogic.waitForLedgerCommit 来等待,直到一个特定的交易被节点接收并验证。除此之外,要在将来的某个时间安排一个活动发生,应该使用 SchedulableState 来实现。
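For instance (txId is assumed to be the SecureHash of a transaction we expect another party to finalise), a flow can suspend until the node has recorded and verified it:

    @Suspendable
    override fun call(): SignedTransaction {
        // Suspends this flow until the transaction with the given ID is recorded and verified.
        return waitForLedgerCommit(txId)
    }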
However, if there is a need for brief pauses in flows, you have the option of using FlowLogic.sleep
in place of where you
might have used Thread.sleep
. Flows should expressly not use Thread.sleep
, since this will prevent the node from
processing other flows in the meantime, significantly impairing the performance of the node.
然而,如果需要在 flows 中短暂地暂停,你可以在原本可能使用 Thread.sleep 的地方使用 FlowLogic.sleep。Flows 明确地不应该使用 Thread.sleep,因为这会阻止节点在这段时间内处理其他的 flows,严重地影响节点的性能。
Even FlowLogic.sleep
should not be used to create long running flows or as a substitute to using the SchedulableState
scheduler, since the Corda ethos is for short-lived flows (long-lived flows make upgrading nodes or CorDapps much more
complicated).
甚至 FlowLogic.sleep 也不应该被用来创建长时间运行的 flows,或者作为 SchedulableState scheduler 的替代品,因为 Corda 的理念是使用短生命周期的 flows(长时间运行的 flows 会让升级节点或 CorDapps 变得复杂得多)。
For example, the finance
package currently uses FlowLogic.sleep
to make several attempts at coin selection when
many states are soft locked, to wait for states to become unlocked:
比如,finance 包当前使用 FlowLogic.sleep 来在许多 states 被 soft locked 的时候对 coin selection 进行多次尝试,以等待 states 变成未被锁定的状态:
for (retryCount in 1..maxRetries) {
    if (!attemptSpend(services, amount, lockId, notary, onlyFromIssuerParties, withIssuerRefs, stateAndRefs)) {
        log.warn("Coin selection failed on attempt $retryCount")
        // TODO: revisit the back off strategy for contended spending.
        if (retryCount != maxRetries) {
            stateAndRefs.clear()
            val durationMillis = (minOf(retrySleep.shl(retryCount), retryCap / 2) * (1.0 + Math.random())).toInt()
            FlowLogic.sleep(durationMillis.millis)
        } else {
            log.warn("Insufficient spendable states identified for $amount")
        }
    } else {
        break
    }
}
API: Identity¶
Party¶
Parties on the network are represented using the AbstractParty class. There are two types of AbstractParty:
- Party, identified by a PublicKey and a CordaX500Name
- AnonymousParty, identified by a PublicKey only
Corda 网络中的 parties 是使用 AbstractParty 类来表示的。AbstractParty 有两种类型:
- Party,通过一个 PublicKey 和一个 CordaX500Name 来识别
- AnonymousParty,只通过一个 PublicKey 来识别
Using AnonymousParty
to identify parties in states and commands prevents nodes from learning the identities
of the parties involved in a transaction when they verify the transaction’s dependency chain. When preserving the
anonymity of each party is not required (e.g. for internal processing), Party
can be used instead.
在 states 和 commands 中使用 AnonymousParty 来标识 parties,可以避免节点在验证一个 transaction 的依赖链(dependency chain)的时候得知这个 transaction 涉及到的 parties 的身份。当不需要为每个 party 保留匿名性的时候(比如对于内部的处理),可以使用 Party。
The identity service allows flows to resolve AnonymousParty
to Party
, but only if the anonymous party’s
identity has already been registered with the node (typically handled by SwapIdentitiesFlow
or
IdentitySyncFlow
, discussed below).
Identity service 允许 flows 将 AnonymousParty 解析为 Party,但是仅当这个匿名 party 的 identity 已经在这个节点中注册过了(通常是由下边要介绍的 SwapIdentitiesFlow 或者 IdentitySyncFlow 来处理的)。
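A short sketch of that resolution inside a flow (anonymousOwner is assumed to come from a state we hold):

    // Returns the well-known Party, or null if this node has never registered the mapping.
    val wellKnown: Party? = serviceHub.identityService.wellKnownPartyFromAnonymous(anonymousOwner)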
Party names use the CordaX500Name
data class, which enforces the structure of names within Corda, as well as
ensuring a consistent rendering of the names in plain text.
Party 的名字使用的是 CordaX500Name
数据类,这个强制了在 Corda 中的名字的结构,同时也确保了当名字转变为纯文本格式时的一致性。
Support for both Party
and AnonymousParty
classes in Corda enables sophisticated selective disclosure of
identity information. For example, it is possible to construct a transaction using an AnonymousParty
(so nobody can
learn of your involvement by inspection of the transaction), yet prove to specific counterparts that this
AnonymousParty
actually corresponds to your well-known identity. This is achieved using the
PartyAndCertificate
data class, which contains the X.509 certificate path proving that a given AnonymousParty
corresponds to a given Party
. Each PartyAndCertificate
can be propagated to counterparties on a need-to-know
basis.
在 Corda 中同时支持 Party 和 AnonymousParty 类,使得对身份信息进行精细的、有选择的披露成为可能。例如,可以使用一个 AnonymousParty 来构建一个 transaction(所以没有人能够通过检查这个 transaction 来得知你的参与),但仍然可以向特定的 counterparts 证明这个 AnonymousParty 实际上对应的就是你的 well-known 身份。这是通过使用 PartyAndCertificate 数据类来实现的,它包含了 X.509 证书路径,可以证明一个给定的 AnonymousParty 对应于一个给定的 Party。每个 PartyAndCertificate 都可以基于 need-to-know 的原则传播给 counterparties。
The PartyAndCertificate
class is also used by the network map service to represent well-known identities, with the
certificate path proving the certificate was issued by the doorman service.
PartyAndCertificate
类同样也会被 network map service 使用来代表 well-known identities,包括证书的路径,以此来证明证书是由 doorman service 发行的。
Confidential identities¶
警告
The confidential-identities
module is still not stabilised, so this API may change in future releases.
See Corda API.
警告
confidential-identities
模块目前还不是稳定的,所以这个 API 可能在以后会改动。查看 Corda API。
Confidential identities are key pairs where the corresponding X.509 certificate (and path) are not made public, so that
parties who are not involved in the transaction cannot identify the owner. They are owned by a well-known identity,
which must sign the X.509 certificate. Before constructing a new transaction the involved parties must generate and
exchange new confidential identities, a process which is managed using SwapIdentitiesFlow
(discussed below). The
public keys of these confidential identities are then used when generating output states and commands for the
transaction.
Confidential identities 是一些密钥对,它们对应的 X.509 证书(和证书路径)并不是公开的,所以没有参与这个 transaction 的 parties 无法识别出它的所有者是谁。它们由一个 well-known identity 持有,这个 well-known identity 必须为 X.509 证书签名。在构建一个新的 transaction 之前,参与的 parties 必须生成并交换新的 confidential identities,这个过程是使用 SwapIdentitiesFlow(下边会讨论)来管理的。这些 confidential identities 的公钥随后会在为这个 transaction 生成 output states 和 commands 的时候被使用。
Where using outputs from a previous transaction in a new transaction, counterparties may need to know who the involved
parties are. One example is the TwoPartyTradeFlow
, where an existing asset is exchanged for cash. If confidential
identities are being used, the buyer will want to ensure that the asset being transferred is owned by the seller, and
the seller will likewise want to ensure that the cash being transferred is owned by the buyer. Verifying this requires
both nodes to have a copy of the confidential identities for the asset and cash input states. IdentitySyncFlow
manages this process. It takes as inputs a transaction and a counterparty, and for every confidential identity involved
in that transaction for which the calling node holds the certificate path, it sends this certificate path to the
counterparty.
当在一个新的 transaction 中使用前一个 transaction 的 outputs 的时候,counterparties 可能需要知道都有哪些 parties 参与其中。一个例子就是 TwoPartyTradeFlow,其中一个已经存在的 asset 被用来交换现金。如果使用了 confidential identities,购买方会想要确保被转移的这个 asset 确实是由出售方所有的,同样地,出售方也会想要确保被转移的这笔现金确实是由购买方所有的。为了验证这些,两个节点都需要持有这个 asset 和现金 input states 的 confidential identities 的副本。IdentitySyncFlow 管理了这个过程。它接收一个 transaction 和一个 counterparty 作为输入,并且针对这个 transaction 中所涉及的、调用节点持有其证书路径的每一个 confidential identity,它会将这个证书路径发送给 counterparty。
SwapIdentitiesFlow¶
SwapIdentitiesFlow
is typically run as a subflow of another flow. It takes as its sole constructor argument the
counterparty we want to exchange confidential identities with. It returns a mapping from the identities of the caller
and the counterparty to their new confidential identities. In the future, this flow will be extended to handle swapping
identities with multiple parties at once.
SwapIdentitiesFlow 通常作为另一个 flow 的 subflow 来运行。它唯一的构造参数就是我们想要与之交换 confidential identities 的 counterparty。它会返回一个从调用方和 counterparty 的 identities 映射到他们新的 confidential identities 的 mapping。在未来,这个 flow 会被扩展以处理同时与多个 parties 交换 identities。
You can see an example of using SwapIdentitiesFlow
in TwoPartyDealFlow.kt
:
你可以在 TwoPartyDealFlow.kt
中查看 SwapIdentitiesFlow
的例子:
@Suspendable
override fun call(): SignedTransaction {
progressTracker.currentStep = GENERATING_ID
val txIdentities = subFlow(SwapIdentitiesFlow(otherSideSession))
val anonymousMe = txIdentities[ourIdentity]!!
val anonymousCounterparty = txIdentities[otherSideSession.counterparty]!!
SwapIdentitiesFlow
goes through the following key steps:
- Generate a new confidential identity from our well-known identity
- Create a
CertificateOwnershipAssertion
object containing the new confidential identity (X500 name, public key) - Sign this object with the confidential identity’s private key
- Send the confidential identity and aforementioned signature to counterparties, while receiving theirs
- Verify the signatures to ensure that identities were generated by the involved set of parties
- Verify the confidential identities are owned by the expected well known identities
- Store the confidential identities and return them to the calling flow
SwapIdentitiesFlow
会经过以下几个主要步骤:
- 从我们的 well-known identity 生成一个新的 confidential identity
- 创建一个
CertificateOwnershipAssertion
对象,该对象包含这个新生成的 confidential identity (X500 名字,公钥) - 使用 confidential identity 的私钥来给这个对象提供签名
- 将 confidential identity 和前面所说的签名发送给所有的 counterparties,同时接收他们的信息
- 验证签名,以确保这些 identities 是由参与的这一系列 parties 所生成的
- 确认这些 confidential identities 是由期望的 well-known identities 所有
- 存储这些 confidential identities 并且将他们返回给调用 flow
This ensures not only that the confidential identity X.509 certificates are signed by the correct well-known identities, but also that the confidential identity private key is held by the counterparty, and that a party cannot claim ownership of another party’s confidential identities.
这样既确保了 confidential identity 的 X.509 证书是由正确的 well-known identities 签过名的,又确保了 confidential identity 的私钥是由 counterparty 持有的,并且一个 party 不能够声称自己是另一个 party 的 confidential identities 的所有者。
IdentitySyncFlow¶
When constructing a transaction whose input states reference confidential identities, it is common for counterparties
to require knowledge of which well-known identity each confidential identity maps to. IdentitySyncFlow
handles this
process. You can see an example of its use in TwoPartyTradeFlow.kt
.
当构建一个 input states 引用了 confidential identities 的 transaction 的时候,counterparties 通常需要知道每一个 confidential identity 对应的是哪一个 well-known identity。IdentitySyncFlow 处理了这个流程。你可以在 TwoPartyTradeFlow.kt 中看到它的使用例子。
IdentitySyncFlow
is divided into two parts:
IdentitySyncFlow
分为两部分:
IdentitySyncFlow.Send
IdentitySyncFlow.Receive
IdentitySyncFlow.Send
is invoked by the party initiating the identity synchronization:
IdentitySyncFlow.Send 是由发起这个 identity 同步的 party 来调用的:
// Now sign the transaction with whatever keys we need to move the cash.
val partSignedTx = serviceHub.signInitialTransaction(ptx, cashSigningPubKeys)
// Sync up confidential identities in the transaction with our counterparty
subFlow(IdentitySyncFlow.Send(sellerSession, ptx.toWireTransaction(serviceHub)))
// Send the signed transaction to the seller, who must then sign it themselves and commit
// it to the ledger by sending it to the notary.
progressTracker.currentStep = COLLECTING_SIGNATURES
val sellerSignature = subFlow(CollectSignatureFlow(partSignedTx, sellerSession, sellerSession.counterparty.owningKey))
val twiceSignedTx = partSignedTx + sellerSignature
The identity synchronization flow goes through the following key steps:
- Extract participant identities from all input and output states and remove any well known identities. Required signers on commands are currently ignored as they are presumed to be included in the participants on states, or to be well-known identities of services (such as an oracle service)
- For each counterparty node, send a list of the public keys of the confidential identities, and receive back a list of those the counterparty needs the certificate path for
- Verify the requested list of identities contains only confidential identities in the offered list, and abort otherwise
- Send the requested confidential identities as
PartyAndCertificate
instances to the counterparty
这个 identity 同步 flow 包括以下几个主要步骤:
- 从所有的 input 和 output states 中提取参与者的 identities,并且移除所有的 well-known identities。Commands 上要求的签名者当前会被忽略,因为他们被假定包含在 states 的参与者中,或者是服务(比如一个 oracle service)的 well-known identities
- 对于每个 counterparty 节点,发送一个 confidential identities 的公钥列表,并接收回一个 counterparty 需要其证书路径的列表
- 验证被请求的 identities 列表仅包含所提供列表中的 confidential identities,否则终止
- 将被请求的 confidential identities 作为 PartyAndCertificate 实例发送给 counterparty
注解
IdentitySyncFlow
works on a push basis. The initiating node can only send confidential identities it has
the X.509 certificates for, and the remote nodes can only request confidential identities being offered (are
referenced in the transaction passed to the initiating flow). There is no standard flow for nodes to collect
confidential identities before assembling a transaction, and this is left for individual flows to manage if
required.
注解
IdentitySyncFlow 是基于推送的方式工作的。发起方节点只能发送它持有 X.509 证书的那些 confidential identities,远程节点也只能请求被提供的那些 confidential identities(即在传递给发起 flow 的 transaction 中被引用的)。这里没有一个标准的 flow 让节点在组装一个 transaction 之前搜集 confidential identities,如果需要的话,这留给各个 flows 自己去管理。
Meanwhile, IdentitySyncFlow.Receive
is invoked by all the other (non-initiating) parties involved in the identity
synchronization process:
同时,IdentitySyncFlow.Receive
会被这个 identity 同步流程引入的所有其他 parties(非发起方)来调用:
// Sync identities to ensure we know all of the identities involved in the transaction we're about to
// be asked to sign
subFlow(IdentitySyncFlow.Receive(otherSideSession))
IdentitySyncFlow
will serve all confidential identities in the provided transaction, irrespective of well-known
identity. This is important for more complex transaction cases with 3+ parties, for example:
- Alice is building the transaction, and provides some input state x owned by a confidential identity of Alice
- Bob provides some input state y owned by a confidential identity of Bob
- Charlie provides some input state z owned by a confidential identity of Charlie
IdentitySyncFlow 会处理所提供的 transaction 中所有的 confidential identities,而不管其 well-known identity 是谁。这对于有 3 个以上 parties 参与的更复杂的 transaction 场景很重要,比如:
- Alice 正在创建一个 transaction,并且提供了一个 input state x,这个 state x 是由 Alice 的 confidential identity 所有
- Bob 提供了由 Bob 的一个 confidential identity 所有的一些 input state y
- Charlie 提供了一些由 Charlie 的 confidential identity 所有 input state z
Alice may know all of the confidential identities ahead of time, but Bob may not know about Charlie’s and vice-versa. The assembled transaction therefore has three input states x, y and z, for which only Alice possesses certificates for all confidential identities. IdentitySyncFlow must send not just Alice’s confidential identity but also any other identities in the transaction to Bob and Charlie.
Alice 可能提前就知道所有的 confidential identities,但是 Bob 可能不知道 Charlie 的,反之亦然。因此,这个被组装出来的 transaction 有 3 个 input states x、y 和 z,而只有 Alice 持有所有 confidential identities 的证书。IdentitySyncFlow 必须不仅发送 Alice 的 confidential identity,还要将这个 transaction 中任何其他的 identities 发送给 Bob 和 Charlie。
API: ServiceHub¶
Within FlowLogic.call
, the flow developer has access to the node’s ServiceHub
, which provides access to the
various services the node provides. The services offered by the ServiceHub
are split into the following categories:
ServiceHub.networkMapCache
- Provides information on other nodes on the network (e.g. notaries…)
ServiceHub.identityService
- Allows you to resolve anonymous identities to well-known identities if you have the required certificates
ServiceHub.attachments
- Gives you access to the node’s attachments
ServiceHub.validatedTransactions
- Gives you access to the transactions stored in the node
ServiceHub.vaultService
- Stores the node’s current and historic states
ServiceHub.keyManagementService
- Manages signing transactions and generating fresh public keys
ServiceHub.myInfo
- Other information about the node
ServiceHub.clock
- Provides access to the node’s internal time and date
在 FlowLogic.call 中,flow 的开发者能够访问节点的 ServiceHub,它提供了对节点所提供的各种服务的访问。ServiceHub 提供的服务分为以下几类:
- ServiceHub.networkMapCache:提供了网络中其他节点的信息(比如 notaries)
- ServiceHub.identityService:如果你有所需的证书,允许你将匿名的 identities 解析为 well-known 的 identities
- ServiceHub.attachments:给你访问节点附件的权限
- ServiceHub.validatedTransactions:给你访问节点存储的 transactions 的权限
- ServiceHub.vaultService:存储了节点的当前和历史的 states
- ServiceHub.keyManagementService:管理签名 transactions 和生成新的公钥
- ServiceHub.myInfo:关于节点的其他信息
- ServiceHub.clock:提供对节点内部时间和日期的访问
Additionally, ServiceHub exposes the following properties:
- ServiceHub.loadState and ServiceHub.toStateAndRef to resolve a StateRef into a TransactionState or a StateAndRef
- ServiceHub.signInitialTransaction to sign a TransactionBuilder and convert it into a SignedTransaction
- ServiceHub.createSignature and ServiceHub.addSignature to create and add signatures to a SignedTransaction
另外,ServiceHub 还暴露了以下的属性:
- ServiceHub.loadState 和 ServiceHub.toStateAndRef 用来将一个 StateRef 解析为一个 TransactionState 或者一个 StateAndRef
- ServiceHub.signInitialTransaction 用来给一个 TransactionBuilder 签名并将其转换成一个 SignedTransaction
- ServiceHub.createSignature 和 ServiceHub.addSignature 用来创建签名并将其添加到一个 SignedTransaction
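As a brief sketch of how these fit together (the notary, output state and command here are assumptions; SomeContract is a placeholder contract):

    val builder = TransactionBuilder(notary)
        .addOutputState(outputState, CONTRACT_ID)
        .addCommand(SomeContract.Commands.Issue(), ourIdentity.owningKey)
    // Sign the builder with the node's legal identity key, producing a SignedTransaction.
    val partlySignedTx: SignedTransaction = serviceHub.signInitialTransaction(builder)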
API: RPC 操作¶
The node’s owner interacts with the node solely via remote procedure calls (RPC). The node’s owner does not have
access to the node’s ServiceHub
.
节点的 owner 跟节点进行交互的方式是使用 remote procedure calls(RPC)。节点的 owner 没有访问节点的 ServiceHub
的权限。
The key RPC operations exposed by the node are:
CordaRPCOps.vaultQueryBy
- Extract states from the node’s vault based on a query criteria
CordaRPCOps.vaultTrackBy
- As above, but also returns an observable of future states matching the query
CordaRPCOps.networkMapFeed
- A list of network nodes, and an observable of changes to the network map
CordaRPCOps.registeredFlows
- See a list of registered flows on the node
CordaRPCOps.startFlowDynamic
- Start one of the node’s registered flows
CordaRPCOps.startTrackedFlowDynamic
- As above, but also returns a progress handle for the flow
CordaRPCOps.nodeInfo
- Returns information about the node
CordaRPCOps.currentNodeTime
- Returns the current time according to the node’s clock
CordaRPCOps.partyFromKey/CordaRPCOps.wellKnownPartyFromX500Name
- Retrieves a party on the network based on a public key or X500 name
CordaRPCOps.uploadAttachment
/CordaRPCOps.openAttachment
/CordaRPCOps.attachmentExists
- Uploads, opens and checks for the existence of attachments
主要的 RPC 操作包括:
- CordaRPCOps.vaultQueryBy:基于查询条件从节点的 vault 中获取 states
- CordaRPCOps.vaultTrackBy:像上边那样,不同的是同时还会返回一个符合查询条件的未来 states 的 observable
- CordaRPCOps.networkMapFeed:一个网络节点的列表,以及一个关于 network map 变动的 observable
- CordaRPCOps.registeredFlows:查看节点上注册的 flows 的列表
- CordaRPCOps.startFlowDynamic:开始一个在节点上注册过的 flow
- CordaRPCOps.startTrackedFlowDynamic:像上边那样,不同的是同时还会返回这个 flow 的一个 progress handle
- CordaRPCOps.nodeInfo:返回关于这个节点的信息
- CordaRPCOps.currentNodeTime:返回这个节点的时钟对应的当前时间
- CordaRPCOps.partyFromKey/CordaRPCOps.wellKnownPartyFromX500Name:根据公钥或者 X500 名字来获取网络中的一个 party
- CordaRPCOps.uploadAttachment/CordaRPCOps.openAttachment/CordaRPCOps.attachmentExists:上传、打开和检查一个附件是否存在
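For illustration, a client typically obtains a CordaRPCOps proxy via CordaRPCClient and then invokes these operations (the host, port and credentials below are placeholders):

    import net.corda.client.rpc.CordaRPCClient
    import net.corda.core.utilities.NetworkHostAndPort

    val client = CordaRPCClient(NetworkHostAndPort("localhost", 10006))
    client.start("user1", "password").use { connection ->
        val proxy = connection.proxy     // a CordaRPCOps implementation
        println(proxy.currentNodeTime()) // e.g. query the node's clock
        println(proxy.registeredFlows()) // list the flows the node exposes
    }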
API: 核心类型¶
Corda provides several more core classes as part of its API.
作为其 API 的一部分,Corda 还提供了另外几个核心类。
SecureHash¶
The SecureHash
class is used to uniquely identify objects such as transactions and attachments by their hash.
Any object that needs to be identified by its hash should implement the NamedByHash
interface:
SecureHash 类被用来通过哈希值唯一地标识对象,比如 transactions 和 attachments。任何需要通过其哈希值来标识的对象都应该实现 NamedByHash 接口:
/** Implemented by anything that can be named by a secure hash value (e.g. transactions, attachments). */
interface NamedByHash {
val id: SecureHash
}
SecureHash
is a sealed class that only defines a single subclass, SecureHash.SHA256
. There are utility methods
to create and parse SecureHash.SHA256
objects.
SecureHash 是一个 sealed 类,它只定义了一个子类 SecureHash.SHA256。这里有一些工具方法用来创建和解析 SecureHash.SHA256 对象。
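A couple of those utilities in action (a sketch; the hash value is simply whatever SHA-256 produces for the input):

    import net.corda.core.crypto.SecureHash

    val hash: SecureHash = SecureHash.sha256("some bytes".toByteArray()) // hash raw bytes
    val parsed: SecureHash = SecureHash.parse(hash.toString())           // parse the hex form back
    check(hash == parsed)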
CompositeKey¶
Corda supports scenarios where more than one signature is required to authorise a state object transition. For example: “Either the CEO or 3 out of 5 of his assistants need to provide signatures”.
Corda 支持需要多个签名来批准一个 state 对象转换的场景。比如:“要么 CEO 提供签名,要么他的 5 个助手中的 3 个提供签名”。
This is achieved using a CompositeKey
, which uses public-key composition to organise the various public keys into a
tree data structure. A CompositeKey
is a tree that stores the cryptographic public key primitives in its leaves and
the composition logic in the intermediary nodes. Every intermediary node specifies a threshold of how many child
signatures it requires.
这个可以通过使用一个 CompositeKey
来实现,这个会使用公钥的组合来将不同的公钥变为一个树状的数据结构。一个 CompositeKey
是一个树状结构,它将加密的公钥的原始数据存储在了它的叶子节点,然后组合的逻辑在中间节点。每一个中间节点指定了一个临界值,这个临界值标识它需要多少个子节点的签名。
An illustration of an “either Alice and Bob, or Charlie” composite key:
一个表示 “或者 Alice 和 Bob,或者是 Charlie” 的组合键:

To allow further flexibility, each child node can have an associated custom weight (the default is 1). The threshold then specifies the minimum total weight of all children required. Our previous example can also be expressed as:
为了提供进一步的灵活性,每个子节点可以有一个相关的自定义 权重(默认是 1)。这时 临界值 指定的是所有子节点所需要达到的最小总权重。我们前面的例子也可以表示为:

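A sketch of building that second form with CompositeKey.Builder (aliceKey, bobKey and charlieKey are assumed PublicKey values):

    import net.corda.core.crypto.CompositeKey

    // Inner node: requires both Alice and Bob (total weight 2 out of 2).
    val aliceAndBob = CompositeKey.Builder()
        .addKey(aliceKey, weight = 1)
        .addKey(bobKey, weight = 1)
        .build(threshold = 2)

    // Root node: either the (Alice and Bob) subtree or Charlie satisfies it (threshold 1).
    val compositeKey = CompositeKey.Builder()
        .addKey(aliceAndBob, weight = 1)
        .addKey(charlieKey, weight = 1)
        .build(threshold = 1)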
Signature verification is performed in two stages:
- Given a list of signatures, each signature is verified against the expected content.
- The public keys corresponding to the signatures are matched against the leaves of the composite key tree in question, and the total combined weight of all children is calculated for every intermediary node. If all thresholds are satisfied, the composite key requirement is considered to be met.
签名的验证分为两个步骤:
1. 给定一个签名列表,每个签名会跟期望的内容进行验证。
2. 签名对应的公钥会跟相关的组合键树的叶子节点进行匹配,并且会为每个中间节点计算所有子节点的合并总权重。如果所有的临界值都被满足,组合键的要求就被认为是满足的。
API: 测试¶
Flow 测试¶
MockNetwork¶
Flow testing can be fully automated using a MockNetwork
composed of StartedMockNode
nodes. Each
StartedMockNode
behaves like a regular Corda node, but its services are either in-memory or mocked out.
Flow 的测试可以使用一个由 StartedMockNode 节点组成的 MockNetwork 来完全自动化。每个 StartedMockNode 的行为都像一个常规的 Corda 节点,但是它的服务要么是运行在内存中的,要么是被模拟出来的。
A MockNetwork
is created as follows:
一个 MockNetwork
向下边这样来创建:
import net.corda.core.identity.CordaX500Name
import net.corda.testing.node.MockNetwork
import net.corda.testing.node.MockNetworkParameters
import net.corda.testing.node.StartedMockNode
import net.corda.testing.node.TestCordapp.Companion.findCordapp
import org.junit.After
import org.junit.Before
class MockNetworkTestsTutorial {
private val mockNet = MockNetwork(MockNetworkParameters(cordappsForAllNodes = listOf(findCordapp("com.mycordapp.package"))))
@After
fun cleanUp() {
mockNet.stopNodes()
}
import net.corda.core.identity.CordaX500Name;
import net.corda.testing.node.MockNetwork;
import net.corda.testing.node.MockNetworkParameters;
import net.corda.testing.node.StartedMockNode;
import org.junit.After;
import org.junit.Before;
import static java.util.Collections.singletonList;
import static net.corda.testing.node.TestCordapp.findCordapp;
public class MockNetworkTestsTutorial {
private final MockNetwork mockNet = new MockNetwork(new MockNetworkParameters().withCordappsForAllNodes(singletonList(findCordapp("com.mycordapp.package"))));
@After
public void cleanUp() {
mockNet.stopNodes();
}
The MockNetwork
requires at a minimum a list of CorDapps to be installed on each StartedMockNode
. The CorDapps are looked up on the
classpath by package name, using TestCordapp.findCordapp
. TestCordapp.findCordapp
scans the current classpath to find the CorDapp that contains the given package.
This includes all the associated CorDapp metadata present in its MANIFEST.
MockNetwork 至少需要一个会被安装到每个 StartedMockNode 上的 CorDapps 列表。这些 CorDapps 是使用 TestCordapp.findCordapp 通过包名在 classpath 上查找的。TestCordapp.findCordapp 会扫描当前的 classpath 来找到包含给定的包的那个 CorDapp。这包括它的 MANIFEST 中存在的所有相关的 CorDapp metadata。
MockNetworkParameters
provides other properties for the network which can be tweaked. They default to sensible values if not specified.
MockNetworkParameters 提供了这个网络的其他一些可以调整的属性。如果没有指定的话,它们会默认使用合理的值。
将节点添加到网络¶
Nodes are created on the MockNetwork
using:
节点可以在 MockNetwork
上被创建:
private lateinit var nodeA: StartedMockNode
private lateinit var nodeB: StartedMockNode
@Before
fun setUp() {
nodeA = mockNet.createNode()
// We can optionally give the node a name.
nodeB = mockNet.createNode(CordaX500Name("Bank B", "London", "GB"))
}
private StartedMockNode nodeA;
private StartedMockNode nodeB;
@Before
public void setUp() {
nodeA = mockNet.createNode();
// We can optionally give the node a name.
nodeB = mockNet.createNode(new CordaX500Name("Bank B", "London", "GB"));
}
Nodes added using createNode
are provided a default set of node parameters. However, it is also possible to
provide different parameters to each node using MockNodeParameters
. Of particular interest are configOverrides
which allow you to
override some of the default node configuration options. Please refer to the MockNodeConfigOverrides
class for details what can currently
be overridden. Also, the additionalCordapps
parameter allows you to add extra CorDapps to a specific node. This is useful when you wish
for all nodes to load a common CorDapp but for a subset of nodes to load CorDapps specific to their role in the network.
使用 createNode 创建的节点会获得一组默认的节点参数。然而,也可以使用 MockNodeParameters 来为每个节点提供不同的参数。其中特别值得关注的是 configOverrides,它允许你重载一些默认的节点配置选项。请参考 MockNodeConfigOverrides 类来查看当前都有哪些可以被重载。另外,additionalCordapps 参数允许你向一个指定的节点添加额外的 CorDapps。当你希望所有的节点都加载一个共同的 CorDapp,但是其中一部分节点要加载针对他们在网络中的角色的特定 CorDapps 的时候,这会很有用。
运行网络¶
When using a MockNetwork
, you must be careful to ensure that all the nodes have processed all the relevant messages
before making assertions about the result of performing some action. For example, if you start a flow to update the ledger
but don’t wait until all the nodes involved have processed all the resulting messages, your nodes’ vaults may not be in
the state you expect.
当使用一个 MockNetwork 的时候,在对执行某些操作的结果做断言之前,你必须小心地确保所有的节点都已经处理完所有相关的消息。比如,如果你启动了一个 flow 来更新账本,但是没有等到所有相关的节点处理完所有产生的消息的话,你的节点的 vaults 可能并不会处于你所期望的状态。
When networkSendManuallyPumped
is set to false
, you must manually initiate the processing of received messages.
You manually process received messages as follows:
StartedMockNode.pumpReceive()
processes a single message from the node’s queueMockNetwork.runNetwork()
processes all the messages in every node’s queue until there are no further messages to process
当 networkSendManuallyPumped 被设置为 false 的时候,你必须手动地发起对接收到的消息的处理。你可以像下边这样手动地处理接收到的消息:
StartedMockNode.pumpReceive()
从节点的 queue 中处理一条信息MockNetwork.runNetwork()
处理每个节点的 queue 里的所有消息,直到没有消息需要被处理
When networkSendManuallyPumped
is set to true
, nodes will automatically process the messages they receive. You
can block until all messages have been processed using MockNetwork.waitQuiescent()
.
当 networkSendManuallyPumped 被设置为 true 的时候,节点将会自动地处理它们接收到的消息。你可以使用 MockNetwork.waitQuiescent() 来阻塞,直到所有的消息都被处理完。
警告
If threadPerNode
is set to true
, networkSendManuallyPumped
must also be set to true
.
警告
如果 threadPerNode
被设置为 true
,networkSendManuallyPumped
必须也被设置为 true
。
运行 flows¶
A StartedMockNode
starts a flow using the StartedNodeServices.startFlow
method. This method returns a future
representing the output of running the flow.
StartedMockNode 使用 StartedNodeServices.startFlow 方法来启动一个 flow。这个方法返回一个代表运行这个 flow 的 output 的 future。
val signedTransactionFuture = nodeA.services.startFlow(IOUFlow(iouValue = 99, otherParty = nodeBParty))
CordaFuture<SignedTransaction> future = startFlow(nodeA.getServices(), new ExampleFlow.Initiator(1, nodeBParty));
The network must then be manually run before retrieving the future’s value:
在获取这个 future 的值之前,必须手动地运行网络:
val signedTransactionFuture = nodeA.services.startFlow(IOUFlow(iouValue = 99, otherParty = nodeBParty))
// Assuming network.networkSendManuallyPumped == false.
network.runNetwork()
val signedTransaction = signedTransactionFuture.get()
CordaFuture<SignedTransaction> future = startFlow(nodeA.getServices(), new ExampleFlow.Initiator(1, nodeBParty));
// Assuming network.networkSendManuallyPumped == false.
network.runNetwork();
SignedTransaction signedTransaction = future.get();
在内部访问 StartedMockNode
¶
查询节点的 vault¶
Recorded states can be retrieved from the vault of a StartedMockNode
using:
可以使用下边的代码从一个 StartedMockNode
的 vault 中获取记录的 states:
val myStates = nodeA.services.vaultService.queryBy<MyStateType>().states
List<MyStateType> myStates = node.getServices().getVaultService().queryBy(MyStateType.class).getStates();
This allows you to check whether a given state has (or has not) been stored, and whether it has the correct attributes.
这就允许你能够检查对于一个给定的 state 是否已经被存储了,以及它是否含有正确的属性。
检查一个节点的交易存储¶
Recorded transactions can be retrieved from the transaction storage of a StartedMockNode
using:
可以使用下边的代码从一个 StartedMockNode
的交易存储中获取回来已经记录的交易信息:
val transaction = nodeA.services.validatedTransactions.getTransaction(transaction.id)
SignedTransaction transaction = nodeA.getServices().getValidatedTransactions().getTransaction(transaction.getId())
This allows you to check whether a given transaction has (or has not) been stored, and whether it has the correct attributes.
这就允许你能够检查对于一个给定的交易是否已经被存储了,以及它是否含有正确的属性。
Contract 测试¶
The Corda test framework includes the ability to create a test ledger by calling the ledger
function
on an implementation of the ServiceHub
interface.
Corda 测试框架包含了通过在一个 ServiceHub
接口的实现之上调用 ledger
方法创建一个测试账本的能力。
测试 identities¶
You can create dummy identities to use in test transactions using the TestIdentity
class:
你可以使用 TestIdentity
类来创建可以用于测试交易的虚构的 identities:
val bigCorp = TestIdentity(CordaX500Name("BigCorp", "New York", "GB"))
private static final TestIdentity bigCorp = new TestIdentity(new CordaX500Name("BigCorp", "New York", "GB"));
TestIdentity
exposes the following fields and methods:
TestIdentity
暴露了下边的字段和方法:
val identityParty: Party = bigCorp.party
val identityName: CordaX500Name = bigCorp.name
val identityPubKey: PublicKey = bigCorp.publicKey
val identityKeyPair: KeyPair = bigCorp.keyPair
val identityPartyAndCertificate: PartyAndCertificate = bigCorp.identity
Party identityParty = bigCorp.getParty();
CordaX500Name identityName = bigCorp.getName();
PublicKey identityPubKey = bigCorp.getPublicKey();
KeyPair identityKeyPair = bigCorp.getKeyPair();
PartyAndCertificate identityPartyAndCertificate = bigCorp.getIdentity();
You can also create a unique TestIdentity
using the fresh
method:
你也可以使用 fresh
方法创建一个唯一的 TestIdentity
:
val uniqueTestIdentity: TestIdentity = TestIdentity.fresh("orgName")
TestIdentity uniqueTestIdentity = TestIdentity.Companion.fresh("orgName");
MockServices¶
A mock implementation of ServiceHub
is provided in MockServices
. This is a minimal ServiceHub
that
suffices to test contract logic. It has the ability to insert states into the vault, query the vault, and
construct and check transactions.
在 MockServices 中提供了 ServiceHub 的一个模拟实现。这是一个最小化的 ServiceHub,足以用来测试 contract 逻辑。它能够将 states 插入到 vault、查询 vault,以及构建和检查 transactions。
private val ledgerServices = MockServices(
// A list of packages to scan for cordapps
listOf("net.corda.finance.contracts"),
// The identity represented by this set of mock services. Defaults to a test identity.
// You can also use the alternative parameter initialIdentityName which accepts a
// [CordaX500Name]
megaCorp,
mock<IdentityService>().also {
doReturn(megaCorp.party).whenever(it).partyFromKey(megaCorp.publicKey)
doReturn(null).whenever(it).partyFromKey(bigCorp.publicKey)
doReturn(null).whenever(it).partyFromKey(alice.publicKey)
})
ledgerServices = new MockServices(
// A list of packages to scan for cordapps
singletonList("net.corda.finance.contracts"),
// The identity represented by this set of mock services. Defaults to a test identity.
// You can also use the alternative parameter initialIdentityName which accepts a
// [CordaX500Name]
megaCorp,
// An implementation of [IdentityService], which contains a list of all identities known
// to the node. Use [makeTestIdentityService] which returns an implementation of
// [InMemoryIdentityService] with the given identities
makeTestIdentityService(megaCorp.getIdentity())
);
Alternatively, there is a helper constructor which just accepts a list of TestIdentity
. The first identity provided is
the identity of the node whose ServiceHub
is being mocked, and any subsequent identities are identities that the node
knows about. Only the calling package is scanned for cordapps and a test IdentityService
is created
for you, using all the given identities.
或者,还有一个只接收一个 TestIdentity 列表的 helper 构造函数。提供的第一个 identity 是其 ServiceHub 被模拟的那个节点的 identity,后续的任何 identities 是这个节点知道的 identities。只有调用方所在的包会被扫描 CorDapps,并且会使用所有给定的 identities 为你创建一个测试用的 IdentityService。
@Suppress("unused")
private val simpleLedgerServices = MockServices(
// This is the identity of the node
megaCorp,
// Other identities the test node knows about
bigCorp,
alice
)
private final MockServices simpleLedgerServices = new MockServices(
// This is the identity of the node
megaCorp,
// Other identities the test node knows about
bigCorp,
alice
);
使用一个测试账本来编写测试¶
The ServiceHub.ledger
extension function allows you to create a test ledger. Within the ledger wrapper you can create
transactions using the transaction
function. Within a transaction you can define the input
and
output
states for the transaction, alongside any commands that are being executed, the timeWindow
in which the
transaction has been executed, and any attachments
, as shown in this example test:
ServiceHub.ledger 扩展方法允许你创建一个测试账本。在这个 ledger wrapper 中,你可以使用 transaction 方法来创建 transactions。在一个 transaction 中,你可以为这个 transaction 定义 input 和 output states,以及任何会被执行的 commands、这个 transaction 被执行所在的 timeWindow,和任何的 attachments,就像下边这个测试例子所展示的:
@Test
fun simpleCPMoveSuccess() {
val inState = getPaper()
ledgerServices.ledger(dummyNotary.party) {
transaction {
input(CP_PROGRAM_ID, inState)
command(megaCorp.publicKey, CommercialPaper.Commands.Move())
attachments(CP_PROGRAM_ID)
timeWindow(TEST_TX_TIME)
output(CP_PROGRAM_ID, "alice's paper", inState.withOwner(alice.party))
verifies()
}
}
}
@Test
public void simpleCPMoveSuccess() {
ICommercialPaperState inState = getPaper();
ledger(ledgerServices, l -> {
l.transaction(tx -> {
tx.input(JCP_PROGRAM_ID, inState);
tx.command(megaCorp.getPublicKey(), new JavaCommercialPaper.Commands.Move());
tx.attachments(JCP_PROGRAM_ID);
tx.timeWindow(TEST_TX_TIME);
tx.output(JCP_PROGRAM_ID, "alice's paper", inState.withOwner(alice.getParty()));
return tx.verifies();
});
return Unit.INSTANCE;
});
}
Once all the transaction components have been specified, you can run verifies() to check that the given transaction is valid.
Checking for failure states¶
In order to test for failures, you can use the failsWith method, or in Kotlin the fails with helper method, which asserts that the transaction fails with a specific error. If you just want to assert that the transaction has failed without verifying the message, there is also a fails method.
@Test
fun simpleCPMoveFails() {
val inState = getPaper()
ledgerServices.ledger(dummyNotary.party) {
transaction {
input(CP_PROGRAM_ID, inState)
command(megaCorp.publicKey, CommercialPaper.Commands.Move())
attachments(CP_PROGRAM_ID)
`fails with`("the state is propagated")
}
}
}
@Test
public void simpleCPMoveFails() {
ICommercialPaperState inState = getPaper();
ledger(ledgerServices, l -> {
l.transaction(tx -> {
tx.input(JCP_PROGRAM_ID, inState);
tx.command(megaCorp.getPublicKey(), new JavaCommercialPaper.Commands.Move());
tx.attachments(JCP_PROGRAM_ID);
return tx.failsWith("the state is propagated");
});
return Unit.INSTANCE;
});
}
Note
The transaction DSL forces the last line of the test to be either a verifies or fails with statement.
Testing multiple scenarios at once¶
Within a single transaction block, you can assert several times that the transaction constructed so far either passes or fails verification. For example, you could test that a contract fails to verify because it has no output states, and then add the relevant output state and check that the contract verifies successfully, as in the following example:
@Test
fun simpleCPMoveFailureAndSuccess() {
val inState = getPaper()
ledgerServices.ledger(dummyNotary.party) {
transaction {
input(CP_PROGRAM_ID, inState)
command(megaCorp.publicKey, CommercialPaper.Commands.Move())
attachments(CP_PROGRAM_ID)
`fails with`("the state is propagated")
output(CP_PROGRAM_ID, "alice's paper", inState.withOwner(alice.party))
verifies()
}
}
}
@Test
public void simpleCPMoveSuccessAndFailure() {
ICommercialPaperState inState = getPaper();
ledger(ledgerServices, l -> {
l.transaction(tx -> {
tx.input(JCP_PROGRAM_ID, inState);
tx.command(megaCorp.getPublicKey(), new JavaCommercialPaper.Commands.Move());
tx.attachments(JCP_PROGRAM_ID);
tx.failsWith("the state is propagated");
tx.output(JCP_PROGRAM_ID, "alice's paper", inState.withOwner(alice.getParty()));
return tx.verifies();
});
return Unit.INSTANCE;
});
}
You can also use the tweak function to create a locally scoped transaction that you can make changes to and then return to the original, unmodified transaction, as in the following example:
@Test
fun `simple issuance with tweak and top level transaction`() {
ledgerServices.transaction(dummyNotary.party) {
output(CP_PROGRAM_ID, "paper", getPaper()) // Some CP is issued onto the ledger by MegaCorp.
attachments(CP_PROGRAM_ID)
tweak {
// The wrong pubkey.
command(bigCorp.publicKey, CommercialPaper.Commands.Issue())
timeWindow(TEST_TX_TIME)
`fails with`("output states are issued by a command signer")
}
command(megaCorp.publicKey, CommercialPaper.Commands.Issue())
timeWindow(TEST_TX_TIME)
verifies()
}
}
@Test
public void simpleIssuanceWithTweakTopLevelTx() {
transaction(ledgerServices, tx -> {
tx.output(JCP_PROGRAM_ID, "paper", getPaper()); // Some CP is issued onto the ledger by MegaCorp.
tx.attachments(JCP_PROGRAM_ID);
tx.tweak(tw -> {
tw.command(bigCorp.getPublicKey(), new JavaCommercialPaper.Commands.Issue());
tw.timeWindow(TEST_TX_TIME);
return tw.failsWith("output states are issued by a command signer");
});
tx.command(megaCorp.getPublicKey(), new JavaCommercialPaper.Commands.Issue());
tx.timeWindow(TEST_TX_TIME);
return tx.verifies();
});
}
Chaining transactions¶
The following example shows that within a ledger, you can create more than one transaction in order to test chains of transactions. In addition to transaction, unverifiedTransaction can be used, as in the example below, to create transactions on the ledger without verifying them, for pre-populating the ledger with existing data. When chaining transactions, it is important to note that even though a transaction verifies successfully, the overall ledger may not be valid. This can be verified separately by placing a verifies or fails statement within the ledger block.
@Test
fun `chain commercial paper double spend`() {
val issuer = megaCorp.party.ref(123)
ledgerServices.ledger(dummyNotary.party) {
unverifiedTransaction {
attachments(Cash.PROGRAM_ID)
output(Cash.PROGRAM_ID, "alice's $900", 900.DOLLARS.CASH issuedBy issuer ownedBy alice.party)
}
// Some CP is issued onto the ledger by MegaCorp.
transaction("Issuance") {
output(CP_PROGRAM_ID, "paper", getPaper())
command(megaCorp.publicKey, CommercialPaper.Commands.Issue())
attachments(CP_PROGRAM_ID)
timeWindow(TEST_TX_TIME)
verifies()
}
transaction("Trade") {
input("paper")
input("alice's $900")
output(Cash.PROGRAM_ID, "borrowed $900", 900.DOLLARS.CASH issuedBy issuer ownedBy megaCorp.party)
output(CP_PROGRAM_ID, "alice's paper", "paper".output<ICommercialPaperState>().withOwner(alice.party))
command(alice.publicKey, Cash.Commands.Move())
command(megaCorp.publicKey, CommercialPaper.Commands.Move())
verifies()
}
transaction {
input("paper")
// We moved a paper to another pubkey.
output(CP_PROGRAM_ID, "bob's paper", "paper".output<ICommercialPaperState>().withOwner(bob.party))
command(megaCorp.publicKey, CommercialPaper.Commands.Move())
verifies()
}
fails()
}
}
@Test
public void chainCommercialPaperDoubleSpend() {
PartyAndReference issuer = megaCorp.ref(defaultRef);
ledger(ledgerServices, l -> {
l.unverifiedTransaction(tx -> {
tx.output(Cash.PROGRAM_ID, "alice's $900",
new Cash.State(issuedBy(DOLLARS(900), issuer), alice.getParty()));
tx.attachments(Cash.PROGRAM_ID);
return Unit.INSTANCE;
});
// Some CP is issued onto the ledger by MegaCorp.
l.transaction("Issuance", tx -> {
tx.output(JCP_PROGRAM_ID, "paper", getPaper());
tx.command(megaCorp.getPublicKey(), new JavaCommercialPaper.Commands.Issue());
tx.attachments(JCP_PROGRAM_ID);
tx.timeWindow(TEST_TX_TIME);
return tx.verifies();
});
l.transaction("Trade", tx -> {
tx.input("paper");
tx.input("alice's $900");
tx.output(Cash.PROGRAM_ID, "borrowed $900", new Cash.State(issuedBy(DOLLARS(900), issuer), megaCorp.getParty()));
JavaCommercialPaper.State inputPaper = l.retrieveOutput(JavaCommercialPaper.State.class, "paper");
tx.output(JCP_PROGRAM_ID, "alice's paper", inputPaper.withOwner(alice.getParty()));
tx.command(alice.getPublicKey(), new Cash.Commands.Move());
tx.command(megaCorp.getPublicKey(), new JavaCommercialPaper.Commands.Move());
return tx.verifies();
});
l.transaction(tx -> {
tx.input("paper");
JavaCommercialPaper.State inputPaper = l.retrieveOutput(JavaCommercialPaper.State.class, "paper");
// We moved a paper to other pubkey.
tx.output(JCP_PROGRAM_ID, "bob's paper", inputPaper.withOwner(bob.getParty()));
tx.command(megaCorp.getPublicKey(), new JavaCommercialPaper.Commands.Move());
return tx.verifies();
});
l.fails();
return Unit.INSTANCE;
});
}
Before reading this page, you should be familiar with the key concepts of Corda.
API stability guarantees¶
Corda makes certain commitments about what parts of the API will preserve backwards compatibility as they change and which will not. Over time, more of the API will fall under the stability guarantees. Thus, APIs can be categorized into the following two broad categories:
- public APIs, for which API/ABI backwards compatibility guarantees are provided. See: Public API
- non-public APIs, for which no backwards compatibility guarantees are provided. See: Non-public APIs (experimental)
Public API¶
The following modules form part of Corda’s public API and we commit to API/ABI backwards compatibility in following releases, unless an incompatible change is required for security reasons:
- Core (net.corda.core): core Corda libraries such as crypto functions, types for Corda’s building blocks: states, contracts, transactions, attachments, etc. and some interfaces for nodes and protocols
- Client RPC (net.corda.client.rpc): client RPC
- Client Jackson (net.corda.client.jackson): JSON support for client applications
- DSL Test Utils (net.corda.testing.dsl): a simple DSL for building pseudo-transactions (not the same as the wire protocol) for testing purposes.
- Test Node Driver (net.corda.testing.node, net.corda.testing.driver): test utilities to run nodes programmatically
- Test Utils (net.corda.testing.core): generic test utilities
- Http Test Utils (net.corda.testing.http): a small set of utilities for making HTTP calls, aimed at demos and tests.
- Dummy Contracts (net.corda.testing.contracts): dummy state and contracts for testing purposes
- Mock Services (net.corda.testing.services): mock service implementations for testing purposes
Non-public APIs (experimental)¶
The following modules are not part of Corda's public API and no backwards compatibility guarantees are provided. They are further categorized into two classes:
- the incubating modules, for which we will do our best to minimise disruption to developers using them until we are able to graduate them into the public API
- the internal modules, which are not to be used, and will change without notice
Corda incubating modules¶
- net.corda.confidential: experimental support for confidential identities on the ledger
- net.corda.finance: a range of elementary contracts (and associated schemas) and protocols, such as abstract fungible assets, cash, obligation and commercial paper
- net.corda.client.jfx: support for Java FX UI
- net.corda.client.mock: client mock utilities
- Cordformation: Gradle integration plugins
Corda internal modules¶
Everything else is internal, will change without notice and may even be deleted, and should not be used. This also includes any package that has .internal in it. So, for example, net.corda.core.internal and its sub-packages should not be used.
Some of the public modules may depend on internal modules, so be careful not to rely on these transitive dependencies. In particular, the testing modules depend on the node module, and so you may end up having the node in your test classpath.
Warning
The web server module will be removed in future. You should call Corda nodes through RPC from your web server of choice, e.g. Spring Boot, Vertx or Undertow.
The @DoNotImplement annotation¶
Certain interfaces and abstract classes within the Corda API have been annotated as @DoNotImplement. While we undertake not to remove or modify any of these classes' existing functionality, the annotation is a warning that we may need to extend them in future versions of Corda. CorDapp developers should therefore just use these classes "as is", and not attempt to extend or implement any of them themselves.
This annotation is inherited by subclasses and sub-interfaces.
Quickstart¶
Welcome to the Corda Quickstart Guide. Follow the links below to help get going quickly with Corda.
I want to:
- Learn about Corda for the first time
- Develop a CorDapp
- Run and test a CorDapp on a local Corda network
- Add a node to an existing test Corda network
- Add a node to an existing production network
Learning about Corda for the first time¶
Useful links | Description
---|---
Key concepts | The key concepts and features of the Corda platform
Getting set up for CorDapp development | Setting up your machine to run and develop CorDapps
Running the example CorDapp | A guide to running a simple CorDapp
Developing a CorDapp¶
Useful links | Description
---|---
Hello, World! | The process of developing a basic CorDapp
What is a CorDapp? | An introduction to CorDapps
Writing a CorDapp | How to structure a CorDapp project
Building and installing a CorDapp | How to build a CorDapp
Corda API | An introduction to the CorDapp API
Running and testing a CorDapp on a local Corda network¶
Useful links | Description
---|---
Creating nodes locally | A guide to creating Corda nodes for development and testing, locally and on Docker
Node folder structure | The Corda node folder structure and how to name your node
Node configuration | A detailed description of the Corda node configuration file, with examples
Running nodes locally | A guide to running Corda nodes locally and on Docker
Setting up a dynamic compatibility zone | Considerations for setting up a Corda network
Node shell | A guide to using the built-in command line to control and monitor a node
Node administration | How to monitor a Corda node using an RPC interface
Node Explorer | A GUI-based tool to view transactional data and transaction history for a node
Adding a node to an existing test Corda network¶
Useful links | Description
---|---
Node folder structure | The Corda node folder structure and how to name your node
Node configuration | A detailed description of the Corda node configuration file, with examples
Deploying a node | A detailed guide to deploying a Corda node to your own server
Azure Marketplace | A detailed guide to creating a Corda network on Azure
AWS Marketplace | A detailed guide to creating a Corda network on AWS
Node shell | A guide to using the built-in command line to control and monitor a node
Node administration | How to monitor a Corda node using an RPC interface
Node Explorer | A GUI-based tool to view transactional data and transaction history for a node
Blob inspector | A troubleshooting tool that allows you to read the contents of a binary blob file
Adding a node to an existing production network¶
Corda Network is a global production network of Corda nodes, governed by the independent Corda Network Foundation. You can learn more at: https://corda.network/participation/index.html
The Corda Testnet is a test network maintained by R3 for the community. You can learn more at: https://testnet.corda.network
Key concepts¶
This section describes the key concepts and features of the Corda platform. It is intended for readers who are new to Corda, and want to understand its architecture. It does not contain any code, and is suitable for non-developers.
This section should be read in order:
Networks¶
Summary
- A Corda network is made up of nodes running Corda and CorDapps
- Communication between nodes is point-to-point, instead of relying on global broadcasts
- Each node has a certificate mapping their network identity to a real-world legal identity
- The network is permissioned, with access requiring a certificate from the network operator
Network structure¶
A Corda network is a peer-to-peer network of nodes. Each node runs the Corda software as well as Corda applications known as CorDapps.

All communication between nodes is point-to-point and encrypted using transport-layer security. This means that data is shared only on a need-to-know basis. There are no global broadcasts.
Identity¶
Each node has a single well-known identity. The node’s identity is used to represent the node in transactions, such as when purchasing an asset.
Note
These identities are distinct from the RPC user logins that are able to connect to the node via RPC.
Each network has a network map service that maps each well-known node identity to an IP address. These IP addresses are used for messaging between nodes.
Nodes can also generate confidential identities for individual transactions. The certificate chain linking a confidential identity to a well-known node identity or real-world legal identity is only distributed on a need-to-know basis. This ensures that even if an attacker gets access to an unencrypted transaction, they cannot identify the transaction’s participants without additional information if confidential identities are being used.
Admission to the network¶
Corda networks are semi-private. To join a network, a node must obtain a certificate from the network operator. This certificate maps a well-known node identity to:
- A real-world legal identity
- A public key
The network operator enforces rules regarding the information that nodes must provide and the know-your-customer processes they must undergo before being granted this certificate.
The ledger¶
Summary
- The ledger is subjective from each peer’s perspective
- Two peers are always guaranteed to see the exact same version of any on-ledger facts they share
Overview¶
In Corda, there is no single central store of data. Instead, each node maintains a separate database of known facts. As a result, each peer only sees a subset of facts on the ledger, and no peer is aware of the ledger in its entirety.
For example, imagine a network with five nodes, where each coloured circle represents a shared fact:

We can see that although Carl, Demi and Ed are aware of shared fact 3, Alice and Bob are not.
Equally importantly, Corda guarantees that whenever one of these facts is shared by multiple nodes on the network, it evolves in lockstep in the database of every node that is aware of it:

For example, Alice and Bob will both see the exact same version of shared facts 1 and 7.
States¶
Summary
- States represent on-ledger facts
- States are evolved by marking the current state as historic and creating an updated state
- Each node has a vault where it stores any states relevant to itself
Overview¶
A state is an immutable object representing a fact known by one or more Corda nodes at a specific point in time. States can contain arbitrary data, allowing them to represent facts of any kind (e.g. stocks, bonds, loans, KYC data, identity information…).
For example, the following state represents an IOU - an agreement that Alice owes Bob an amount X:

Specifically, this state represents an IOU of £10 from Alice to Bob.
As well as any information about the fact itself, the state also contains a reference to the contract that governs the evolution of the state over time. We discuss contracts in Contracts.
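To make this concrete, a state of this kind could be sketched in Kotlin roughly as follows. This is a hypothetical illustration, not the class used in the Corda samples; IOUState and its fields are assumed names, and a real CorDapp would also link the state to its governing contract (in Corda 4, via the @BelongsToContract annotation):
import net.corda.core.contracts.Amount
import net.corda.core.contracts.ContractState
import net.corda.core.identity.AbstractParty
import java.util.Currency

// A hypothetical IOU fact: the borrower owes the lender an amount of some currency.
data class IOUState(
    val amount: Amount<Currency>,
    val lender: AbstractParty,
    val borrower: AbstractParty
) : ContractState {
    // The parties that record and track this fact in their vaults.
    override val participants: List<AbstractParty> get() = listOf(lender, borrower)
}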
State sequences¶
As states are immutable, they cannot be modified directly to reflect a change in the state of the world.
Instead, the lifecycle of a shared fact over time is represented by a state sequence. When a state needs to be updated, we create a new version of the state representing the new state of the world, and mark the existing state as historic.
This sequence of state replacements gives us a full view of the evolution of the shared fact over time. We can picture this situation as follows:

Vault¶
Each node on the network maintains a vault - a database where it tracks all the current and historic states that it is aware of, and which it considers to be relevant to itself:

We can think of the ledger from each node’s point of view as the set of all the current (i.e. non-historic) states that it is aware of.
Reference states¶
Not all states need to be updated by the parties which use them. In the case of reference data, there is a common pattern where one party creates reference data, which is then used (but not updated) by other parties. For this use-case, the states containing reference data are referred to as "reference states". Syntactically, reference states are no different to regular states. However, they are treated differently by Corda transactions. See Transactions for more details.
Transactions¶
Summary
- Transactions are proposals to update the ledger
- A transaction proposal will only be committed if:
- It doesn’t contain double-spends
- It is contractually valid
- It is signed by the required parties
Overview¶
Corda uses a UTXO (unspent transaction output) model where every state on the ledger is immutable. The ledger evolves over time by applying transactions, which update the ledger by marking zero or more existing ledger states as historic (the inputs) and producing zero or more new ledger states (the outputs). Transactions represent a single link in the state sequences seen in States.
Here is an example of an update transaction, with two inputs and two outputs:

A transaction can contain any number of inputs, outputs and references of any type:
- They can include many different state types (e.g. both cash and bonds)
- They can be issuances (have zero inputs) or exits (have zero outputs)
- They can merge or split fungible assets (e.g. combining a $2 state and a $5 state into a $7 cash state)
Transactions are atomic: either all the transaction’s proposed changes are accepted, or none are.
There are two basic types of transactions:
- Notary-change transactions (used to change a state's appointed notary - see Notaries)
- General transactions (used for everything else)
Transaction chains¶
When creating a new transaction, the output states that the transaction will propose do not exist yet, and must therefore be created by the proposer(s) of the transaction. However, the input states already exist as the outputs of previous transactions. We therefore include them in the proposed transaction by reference.
These input states references are a combination of:
- The hash of the transaction that created the input
- The input’s index in the outputs of the previous transaction
This situation can be illustrated as follows:

These input state references link together transactions over time, forming what is known as a transaction chain.
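In the API, such a reference is modelled by net.corda.core.contracts.StateRef, which combines exactly these two components. A minimal sketch (SecureHash.zeroHash stands in for the real id of the creating transaction):
import net.corda.core.contracts.StateRef
import net.corda.core.crypto.SecureHash

// A reference to the first output (index 0) of the transaction with the given hash.
val inputRef = StateRef(txhash = SecureHash.zeroHash, index = 0)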
Committing transactions¶
Initially, a transaction is just a proposal to update the ledger. It represents the future state of the ledger that is desired by the transaction builder(s):

To become reality, the transaction must receive signatures from all of the required signers (see Commands, below). Each required signer appends their signature to the transaction to indicate that they approve the proposal:

If all of the required signatures are gathered, the transaction becomes committed:

This means that:
- The transaction’s inputs are marked as historic, and cannot be used in any future transactions
- The transaction’s outputs become part of the current state of the ledger
Transaction validity¶
Each required signer should only sign the transaction if the following two conditions hold:
Transaction validity: For both the proposed transaction, and every transaction in the chain of transactions that created the current proposed transaction’s inputs:
- The transaction is digitally signed by all the required parties
- The transaction is contractually valid (see Contracts)
Transaction uniqueness: There exists no other committed transaction that has consumed any of the inputs to our proposed transaction (see 共识)
If the transaction gathers all the required signatures but these conditions do not hold, the transaction’s outputs will not be valid, and will not be accepted as inputs to subsequent transactions.
Reference states¶
As mentioned in States, some states need to be referred to by the contracts of other input or output states but not updated/consumed. This is where reference states come in. When a state is added to the references list of a transaction, instead of the inputs or outputs list, then it is treated as a reference state. There are two important differences between regular states and reference states:
- The specified notary for the transaction does check whether the reference states are current. However, reference states are not consumed when the transaction containing them is committed to the ledger.
- The contracts for reference states are not executed for the transaction containing them.
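As an illustration, in Corda 4 a state can be placed on the references list of a transaction under construction via TransactionBuilder.addReferenceState. A hedged sketch, assuming the calling flow already holds a StateAndRef for the reference data:
import net.corda.core.contracts.ContractState
import net.corda.core.contracts.StateAndRef
import net.corda.core.transactions.TransactionBuilder

// Add an existing state to the references list (not the inputs or outputs), so its
// data can be checked by the transaction's contracts without being consumed.
fun addReferenceData(builder: TransactionBuilder, refData: StateAndRef<ContractState>) {
    builder.addReferenceState(refData.referenced())
}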
Other transaction components¶
As well as input states and output states, transactions contain:
- Commands
- Attachments
- Time-Window
- Notary
For example, suppose we have a transaction where Alice uses a £5 cash payment to pay off £5 of an IOU with Bob. This transaction has two supporting attachments and will only be notarised by NotaryClusterA if the notary pool receives it within the specified time-window. This transaction would look as follows:

We explore the role played by the remaining transaction components below.
Commands¶
Suppose we have a transaction with a cash state and a bond state as inputs, and a cash state and a bond state as outputs. This transaction could represent two different scenarios:
- A bond purchase
- A coupon payment on a bond
We can imagine that we’d want to impose different rules on what constitutes a valid transaction depending on whether this is a purchase or a coupon payment. For example, in the case of a purchase, we would require a change in the bond’s current owner, whereas in the case of a coupon payment, we would require that the ownership of the bond does not change.
For this, we have commands. Including a command in a transaction allows us to indicate the transaction’s intent, affecting how we check the validity of the transaction.
Each command is also associated with a list of one or more signers. By taking the union of all the public keys listed in the commands, we get the list of the transaction’s required signers. In our example, we might imagine that:
- In a coupon payment on a bond, only the owner of the bond is required to sign
- In a cash payment, only the owner of the cash is required to sign
We can visualize this situation as follows:

Attachments¶
Sometimes, we have a large piece of data that can be reused across many different transactions. Some examples:
- A calendar of public holidays
- Supporting legal documentation
- A table of currency codes
For this use case, we have attachments. Each transaction can refer to zero or more attachments by hash. These attachments are ZIP/JAR files containing arbitrary content. The information in these files can then be used when checking the transaction’s validity.
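A sketch of how this looks when building a transaction: the file is first imported into the node's attachment storage, and the resulting hash is then added to the transaction (the input stream, uploader name and file name below are made up):
import net.corda.core.node.ServiceHub
import net.corda.core.transactions.TransactionBuilder
import java.io.InputStream

// Import a ZIP/JAR (e.g. a public-holiday calendar) and reference it by its hash.
fun attachCalendar(services: ServiceHub, builder: TransactionBuilder, calendarJar: InputStream) {
    val attachmentId = services.attachments.importAttachment(calendarJar, "app", "holidays.jar")
    builder.addAttachment(attachmentId)
}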
Time-window¶
In some cases, we want a proposed transaction to only be approved during a certain time-window. For example:
- An option can only be exercised after a certain date
- A bond may only be redeemed before its expiry date
In such cases, we can add a time-window to the transaction. Time-windows specify the time window during which the transaction can be committed. We discuss time-windows in the section on Time-windows.
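For example, a window for the bond-redemption case could be attached like this (a sketch; the builder and expiry date are assumed to exist already):
import net.corda.core.contracts.TimeWindow
import net.corda.core.transactions.TransactionBuilder
import java.time.Instant

// The notary will only sign if it processes the transaction within this window.
fun addRedemptionWindow(builder: TransactionBuilder, expiryDate: Instant) {
    builder.setTimeWindow(TimeWindow.untilOnly(expiryDate))  // redeemable only before expiry
}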
Notary¶
A notary pool is a network service that provides uniqueness consensus by attesting that, for a given transaction, it has not already signed other transactions that consume any of the proposed transaction’s input states. The notary pool provides the point of finality in the system.
Note that if the notary entity is absent then the transaction is not notarised at all. This is intended for issuance/genesis transactions that don’t consume any other states and thus can’t double spend anything. For more information on the notary services, see Notaries.
Contracts¶
Summary
- A valid transaction must be accepted by the contract of each of its input and output states
- Contracts are written in a JVM programming language (e.g. Java or Kotlin)
- Contract execution is deterministic and its acceptance of a transaction is based on the transaction’s contents alone
Transaction verification¶
Recall that a transaction is only valid if it is digitally signed by all required signers. However, even if a transaction gathers all the required signatures, it is only valid if it is also contractually valid.
Contract validity is defined as follows:
- Each transaction state specifies a contract type
- A contract takes a transaction as input, and states whether the transaction is considered valid based on the contract’s rules
- A transaction is only valid if the contract of every input state and every output state considers it to be valid
We can picture this situation as follows:

The contract code can be written in any JVM language, and has access to the full capabilities of the language, including:
- Checking the number of inputs, outputs, commands, time-window, and/or attachments
- Checking the contents of any of these components
- Looping constructs, variable assignment, function calls, helper methods, etc.
- Grouping similar states to validate them as a group (e.g. imposing a rule on the combined value of all the cash states)
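For illustration, a minimal contract for the IOU example might look like the following sketch. IOUContract and its Create command are hypothetical names; requireThat throws (and so rejects the transaction) if any condition is false:
import net.corda.core.contracts.CommandData
import net.corda.core.contracts.Contract
import net.corda.core.contracts.requireSingleCommand
import net.corda.core.contracts.requireThat
import net.corda.core.transactions.LedgerTransaction

// A hypothetical contract governing IOU states.
class IOUContract : Contract {
    class Create : CommandData

    // Deterministic check: accepts or rejects based only on the transaction's contents.
    override fun verify(tx: LedgerTransaction) {
        tx.commands.requireSingleCommand<Create>()  // the transaction must carry a Create command
        requireThat {
            "No inputs should be consumed when issuing an IOU." using tx.inputs.isEmpty()
            "There should be one output state." using (tx.outputs.size == 1)
        }
    }
}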
A transaction that is not contractually valid is not a valid proposal to update the ledger, and thus can never be committed to the ledger. In this way, contracts impose rules on the evolution of states over time that are independent of the willingness of the required signers to sign a given transaction.
The contract sandbox¶
Transaction verification must be deterministic - a contract should either always accept or always reject a given transaction. For example, transaction validity cannot depend on the time at which validation is conducted, or the amount of information the peer running the contract holds. This is a necessary condition to ensure that all peers on the network reach consensus regarding the validity of a given ledger update.
Future versions of Corda will evaluate transactions in a strictly deterministic sandbox. The sandbox has a whitelist that prevents the contract from importing libraries that could be a source of non-determinism. This includes libraries that provide the current time, random number generators, libraries that provide filesystem access or networking libraries, for example. Ultimately, the only information available to the contract when verifying the transaction is the information included in the transaction itself.
Developers can pre-verify that their CorDapps are deterministic by linking them against the deterministic modules (see the Deterministic Corda Modules).
Contract limitations¶
Since a contract has no access to information from the outside world, it can only check the transaction for internal validity. It cannot check, for example, that the transaction is in accordance with what was originally agreed with the counterparties.
Peers should therefore check the contents of a transaction before signing it, even if the transaction is contractually valid, to see whether they agree with the proposed ledger update. A peer is under no obligation to sign a transaction just because it is contractually valid. For example, they may be unwilling to take on a loan that is too large, or may disagree on the amount of cash offered for an asset.
Oracles¶
Sometimes, transaction validity will depend on some external piece of information, such as an exchange rate. In these cases, an oracle is required. See Oracles for further details.
Legal prose¶
Each contract also refers to a legal prose document that states the rules governing the evolution of the state over time in a way that is compatible with traditional legal systems. This document can be relied upon in the case of legal disputes.
Flows¶
Summary
- Flows automate the process of agreeing ledger updates
- Communication between nodes only occurs in the context of these flows, and is point-to-point
- Built-in flows are provided to automate common tasks
Motivation¶
Corda networks use point-to-point messaging instead of a global broadcast. This means that coordinating a ledger update requires network participants to specify exactly what information needs to be sent, to which counterparties, and in what order.
Here is a visualisation of the process of agreeing a simple ledger update between Alice and Bob:
The flow framework¶
Rather than having to specify these steps manually, Corda automates the process using flows. A flow is a sequence of steps that tells a node how to achieve a specific ledger update, such as issuing an asset or settling a trade.
Here is the sequence of flow steps involved in the simple ledger update above:

Running flows¶
Once a given business process has been encapsulated in a flow and installed on the node as part of a CorDapp, the node’s owner can instruct the node to kick off this business process at any time using an RPC call. The flow abstracts all the networking, I/O and concurrency issues away from the node owner.
All activity on the node occurs in the context of these flows. Unlike contracts, flows do not execute in a sandbox, meaning that nodes can perform actions such as networking, I/O and use sources of randomness within the execution of a flow.
Inter-node communication¶
Nodes communicate by passing messages between flows. Each node has zero or more flow classes that are registered to respond to messages from a single other flow.
Suppose Alice is a node on the network and wishes to agree a ledger update with Bob, another network node. To communicate with Bob, Alice must:
- Start a flow that Bob is registered to respond to
- Send Bob a message within the context of that flow
- Bob will start its registered counterparty flow
Now that a connection is established, Alice and Bob can communicate to agree a ledger update by passing a series of messages back and forth, as prescribed by the flow steps.
Subflows¶
Flows can be composed by starting a flow as a subprocess in the context of another flow. The flow that is started as a subprocess is known as a subflow. The parent flow will wait until the subflow returns.
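For example, a parent flow commonly delegates signature gathering and finalisation to the built-in subflows, suspending at each subFlow call until the child returns. A sketch, assuming the partially signed transaction and counterparty sessions already exist:
import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.CollectSignaturesFlow
import net.corda.core.flows.FinalityFlow
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.FlowSession
import net.corda.core.transactions.SignedTransaction

// A parent flow composed of two built-in subflows.
class FinaliseFlow(
    private val partiallySignedTx: SignedTransaction,
    private val sessions: List<FlowSession>
) : FlowLogic<SignedTransaction>() {
    @Suspendable
    override fun call(): SignedTransaction {
        val fullySignedTx = subFlow(CollectSignaturesFlow(partiallySignedTx, sessions))
        return subFlow(FinalityFlow(fullySignedTx, sessions))  // notarise and record
    }
}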
The flow library¶
Corda provides a library of flows to handle common tasks, meaning that developers do not have to redefine the logic behind common processes such as:
- Notarising and recording a transaction
- Gathering signatures from counterparty nodes
- Verifying a chain of transactions
Further information on the available built-in flows can be found in API: Flows.
Concurrency¶
The flow framework allows nodes to have many flows active at once. These flows may last days, across node restarts and even upgrades.
This is achieved by serializing flows to disk whenever they enter a blocking state (e.g. when they’re waiting on I/O or a networking call). Instead of waiting for the flow to become unblocked, the node immediately starts work on any other scheduled flows, only returning to the original flow at a later date.
Consensus¶
Summary
- To be committed, transactions must achieve both validity and uniqueness consensus
- Validity consensus requires contractual validity of the transaction and all its dependencies
- Uniqueness consensus prevents double-spends
Two types of consensus¶
Determining whether a proposed transaction is a valid ledger update involves reaching two types of consensus:
- Validity consensus - this is checked by each required signer before they sign the transaction
- Uniqueness consensus - this is only checked by a notary service
Validity consensus¶
Validity consensus is the process of checking that the following conditions hold both for the proposed transaction, and for every transaction in the transaction chain that generated the inputs to the proposed transaction:
- The transaction is accepted by the contracts of every input and output state
- The transaction has all the required signatures
It is not enough to verify the proposed transaction itself. We must also verify every transaction in the chain of transactions that led up to the creation of the inputs to the proposed transaction.
This is known as walking the chain. Suppose, for example, that a party on the network proposes a transaction transferring us a treasury bond. We can only be sure that the bond transfer is valid if:
- The treasury bond was issued by the central bank in a valid issuance transaction
- Every subsequent transaction in which the bond changed hands was also valid
The only way to be sure of both conditions is to walk the transaction’s chain. We can visualize this process as follows:

When verifying a proposed transaction, a given party may not have every transaction in the transaction chain that they need to verify. In this case, they can request the missing transactions from the transaction proposer(s). The transaction proposer(s) will always have the full transaction chain, since they would have requested it when verifying the transaction that created the proposed transaction’s input states.
Uniqueness consensus¶
Imagine that Bob holds a valid central-bank-issued cash state of $1,000,000. Bob can now create two transaction proposals:
- A transaction transferring the $1,000,000 to Charlie in exchange for £800,000
- A transaction transferring the $1,000,000 to Dan in exchange for €900,000
This is a problem because, although both transactions will achieve validity consensus, Bob has managed to “double-spend” his USD to get double the amount of GBP and EUR. We can visualize this as follows:

To prevent this, a valid transaction proposal must also achieve uniqueness consensus. Uniqueness consensus is the requirement that none of the inputs to a proposed transaction have already been consumed in another transaction.
If one or more of the inputs have already been consumed in another transaction, this is known as a double spend, and the transaction proposal is considered invalid.
Uniqueness consensus is provided by notaries. See Notaries for more details.
Notaries¶
Summary
- Notary clusters prevent “double-spends”
- Notary clusters are also time-stamping authorities. If a transaction includes a time-window, it can only be notarised during that window
- Notary clusters may optionally also validate transactions, in which case they are called “validating” notaries, as opposed to “non-validating”
- A network can have several notary clusters, each running a different consensus algorithm
Overview¶
A notary cluster is a network service that provides uniqueness consensus by attesting that, for a given transaction, it has not already signed other transactions that consume any of the proposed transaction's input states.
Upon being asked to notarise a transaction, a notary cluster will either:
- Sign the transaction if it has not already signed other transactions consuming any of the proposed transaction’s input states
- Reject the transaction and flag that a double-spend attempt has occurred otherwise
In doing so, the notary cluster provides the point of finality in the system. Until the notary cluster’s signature is obtained, parties cannot be sure that an equally valid, but conflicting, transaction will not be regarded as the “valid” attempt to spend a given input state. However, after the notary cluster’s signature is obtained, we can be sure that the proposed transaction’s input states have not already been consumed by a prior transaction. Hence, notarisation is the point of finality in the system.
Every state has an appointed notary cluster, and a notary cluster will only notarise a transaction if it is the appointed notary cluster of all the transaction’s input states.
Consensus algorithms¶
Corda has “pluggable” consensus, allowing notary clusters to choose a consensus algorithm based on their requirements in terms of privacy, scalability, legal-system compatibility and algorithmic agility.
In particular, notary clusters may differ in terms of:
- Structure - a notary cluster may be a single node, several mutually-trusting nodes, or several mutually-distrusting nodes
- Consensus algorithm - a notary cluster may choose to run a high-speed, high-trust algorithm such as RAFT, a low-speed, low-trust algorithm such as BFT, or any other consensus algorithm it chooses
Validation¶
A notary cluster must also decide whether or not to provide validity consensus by validating each transaction before committing it. In making this decision, it faces the following trade-off:
- If a transaction is not checked for validity (non-validating notary), it creates the risk of “denial of state” attacks, where a node knowingly builds an invalid transaction consuming some set of existing states and sends it to the notary cluster, causing the states to be marked as consumed
- If the transaction is checked for validity (validating notary), the notary will need to see the full contents of the transaction and its dependencies. This leaks potentially private data to the notary cluster
There are several further points to keep in mind when evaluating this trade-off. In the case of the non-validating model, Corda’s controlled data distribution model means that information on unconsumed states is not widely shared. Additionally, Corda’s permissioned network means that the notary cluster can store the identity of the party that created the “denial of state” transaction, allowing the attack to be resolved off-ledger.
In the case of the validating model, the use of anonymous, freshly-generated public keys instead of legal identities to identify parties in a transaction limits the information the notary cluster sees.
Data visibility¶
Below is a summary of what specific transaction components have to be revealed to each type of notary:
Transaction components | Validating | Non-validating
---|---|---
Input states | Fully visible | References only [1]
Output states | Fully visible | Hidden
Commands (with signer identities) | Fully visible | Hidden
Attachments | Fully visible | Hidden
Time window | Fully visible | Fully visible
Notary identity | Fully visible | Fully visible
Signatures | Fully visible | Hidden
Both types of notaries record the calling party’s identity: the public key and the X.500 Distinguished Name.
[1] A state reference is composed of the issuing transaction's id and the state's position in the outputs. It does not reveal what kind of state it is or its contents.
Multiple notaries¶
Each Corda network can have multiple notary clusters, each potentially running a different consensus algorithm. This provides several benefits:
- Privacy - we can have both validating and non-validating notary clusters on the same network, each running a different algorithm. This allows nodes to choose the preferred notary cluster on a per-transaction basis
- Load balancing - spreading the transaction load over multiple notary clusters allows higher transaction throughput for the platform overall
- Low latency - latency can be minimised by choosing a notary cluster physically closer to the transacting parties
Changing notaries¶
Remember that a notary cluster will only sign a transaction if it is the appointed notary cluster of all of the transaction’s input states. However, there are cases in which we may need to change a state’s appointed notary cluster. These include:
- When a single transaction needs to consume several states that have different appointed notary clusters
- When a node would prefer to use a different notary cluster for a given transaction due to privacy or efficiency concerns
Before these transactions can be created, the states must first all be re-pointed to the same notary cluster. This is achieved using a special notary-change transaction that takes:
- A single input state
- An output state identical to the input state, except that the appointed notary cluster has been changed
The input state’s appointed notary cluster will sign the transaction if it doesn’t constitute a double-spend, at which point a state will enter existence that has all the properties of the old state, but has a different appointed notary cluster.
Vault¶
Soft Locking¶
Soft Locking is implemented in the vault to try and prevent a node constructing transactions that attempt to use the same input(s) simultaneously. Such transactions would result in naturally wasted work when the notary rejects them as double spend attempts.
Soft locks are automatically applied to coin selection (e.g. cash spending) to ensure that no two transactions attempt to spend the same fungible states. If there is an insufficient number of fungible states available to satisfy both requests, one of the requesters will receive an InsufficientBalanceException.
Note
The Cash Contract schema table is now automatically generated upon node startup as Coin Selection now uses this table to ensure correct locking and selection of states to satisfy minimum requested spending amounts.
Soft locks are also automatically applied within flows that issue or receive new states. These states are effectively soft locked until flow termination (exit or error) or by explicit release.
In addition, the VaultService exposes a number of functions a developer may use to explicitly reserve, release and query soft locks associated with states, as required by their CorDapp application logic:
/**
* Reserve a set of [StateRef] for a given [UUID] unique identifier.
* Typically, the unique identifier will refer to a [FlowLogic.runId]'s [UUID] associated with an in-flight flow.
* In this case if the flow terminates the locks will automatically be freed, even if there is an error.
* However, the user can specify their own [UUID] and manage this manually, possibly across the lifetime of multiple
* flows, or from other thread contexts e.g. [CordaService] instances.
* In the case of coin selection, soft locks are automatically taken upon gathering relevant unconsumed input refs.
*
* @throws [StatesNotAvailableException] when not possible to soft-lock all of requested [StateRef].
*/
@Throws(StatesNotAvailableException::class)
fun softLockReserve(lockId: UUID, stateRefs: NonEmptySet<StateRef>)
/**
* Release all or an explicitly specified set of [StateRef] for a given [UUID] unique identifier.
* A [Vault] soft-lock manager is automatically notified from flows that are terminated, such that any soft locked
* states may be released.
* In the case of coin selection, soft-locks are automatically released once previously gathered unconsumed
* input refs are consumed as part of cash spending.
*/
fun softLockRelease(lockId: UUID, stateRefs: NonEmptySet<StateRef>? = null)
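A hedged usage sketch of the two functions above, for a caller that manages its own lock id (e.g. a CordaService); the helper name and the try/finally policy are illustrative:

import net.corda.core.contracts.StateRef
import net.corda.core.node.ServiceHub
import net.corda.core.utilities.NonEmptySet
import java.util.UUID

fun reserveThenRelease(serviceHub: ServiceHub, stateRefs: NonEmptySet<StateRef>) {
    val lockId = UUID.randomUUID()
    // Throws StatesNotAvailableException if any of the states is already locked.
    serviceHub.vaultService.softLockReserve(lockId, stateRefs)
    try {
        // ... build and commit a transaction spending these states ...
    } finally {
        // Passing null releases every state held under this lock id.
        serviceHub.vaultService.softLockRelease(lockId, null)
    }
}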
Query¶
By default, vault queries will always include soft-locked states in their result sets. Custom filterable criteria can be specified using the SoftLockingCondition attribute of VaultQueryCriteria:
@CordaSerializable
data class SoftLockingCondition(val type: SoftLockingType, val lockIds: List<UUID> = emptyList())
@CordaSerializable
enum class SoftLockingType {
UNLOCKED_ONLY, // only unlocked states
LOCKED_ONLY, // only soft locked states
SPECIFIED, // only those soft locked states specified by lock id(s)
UNLOCKED_AND_SPECIFIED // all unlocked states plus those soft locked states specified by lock id(s)
}
Explicit Usage¶
Soft locks are associated with transactions, typically within the lifecycle of a flow. Specifically, every time a flow is started, a soft lock identifier is associated with that flow for its duration (and released upon its natural termination or in the event of an exception). The VaultSoftLockManager is responsible within the node for automatically managing this soft lock registration and release process for flows. The TransactionBuilder class has a new lockId field for the purpose of tracking lockable states. By default, it is automatically set to a random UUID (outside of a flow) or to a flow's unique ID (within a flow).
Upon building a new transaction to perform some action for a set of states on a contract, a developer must explicitly register any states they may wish to hold until that transaction is committed to the ledger. These states will be effectively ‘soft locked’ (not usable by any other transaction) until the developer explicitly releases these or the flow terminates or errors (at which point they are automatically released).
Use Cases¶
A prime example where soft locking is automatically enabled is the process of issuance and transfer of fungible state (eg. cash). An issuer of some fungible asset (eg. Bank of Corda) may wish to transfer that new issue immediately to the issuance requester (eg. Big Corporation). This issuance and transfer operation must be atomic, such that another flow (or instance of the same flow) does not step in and unintentionally spend the states issued by Bank of Corda before they are transferred to the intended recipient. Soft locking will automatically prevent newly issued states within IssuerFlow from being spendable by any other flow until the IssuerFlow itself terminates.
Other use cases for soft locking may involve competing flows attempting to match trades or any other concurrent activities that may involve operating on an identical set of unconsumed states.
The vault contains data extracted from the ledger that is considered relevant to the node’s owner, stored in a relational model that can be easily queried and worked with.
The vault keeps track of both unconsumed and consumed states:
- Unconsumed (or unspent) states represent fungible states available for spending (including spend-to-self transactions) and linear states available for evolution (eg. in response to a lifecycle event on a deal) or transfer to another party.
- Consumed (or spent) states represent ledger immutable state for the purpose of transaction reporting, audit and archival, including the ability to perform joins with app-private data (like customer notes).
By fungible we refer to assets of measurable quantity (eg. a cash currency, units of stock) which can be combined together to represent a single ledger state.
Like with a cryptocurrency wallet, the Corda vault can create transactions that send value (eg. transfer of state) to someone else by combining fungible states and possibly adding a change output that makes the values balance (this process is usually referred to as ‘coin selection’). Vault spending ensures that transactions respect the fungibility rules in order to ensure that the issuer and reference data is preserved as the assets pass from hand to hand.
A feature called soft locking provides the ability to automatically or explicitly reserve states to prevent multiple transactions within the same node from trying to use the same output simultaneously. Whilst this scenario would ultimately be detected by a notary, soft locking provides a mechanism of early detection for such unwarranted and invalid scenarios. Soft Locking describes this feature in detail.
Note
Basic ‘coin selection’ is currently implemented. Future work includes fungible state optimisation (splitting and merging of states in the background), and ‘state re-issuance’ (sending of states back to the issuer for re-issuance, thus pruning long transaction chains and improving privacy).
There is also a facility for attaching descriptive textual notes against any transaction stored in the vault.
The vault supports the management of data in both authoritative (“on-ledger”) form and, where appropriate, shadow (“off-ledger”) form:
- “On-ledger” data refers to distributed ledger state (cash, deals, trades) to which a firm is participant.
- “Off-ledger” data refers to a firm’s internal reference, static and systems data.
The following diagram illustrates the breakdown of the vault into sub-system components:

Note the following:
- The vault “On Ledger” store tracks unconsumed state and is updated internally by the node upon recording of a transaction on the ledger (following successful smart contract verification and signature by all participants).
- The vault “Off Ledger” store refers to additional data added by the node owner subsequent to transaction recording.
- The vault performs fungible state spending (and in future, fungible state optimisation management including merging, splitting and re-issuance).
- Vault extensions represent additional custom plugin code a developer may write to query specific custom contract state attributes.
- Customer “Off Ledger” (private store) represents internal organisational data that may be joined with the vault data to perform additional reporting or processing.
- A Vault Query API is exposed to developers using standard Corda RPC and CorDapp plugin mechanisms.
- A vault update API is internally used by transaction recording flows.
- The vault database schemas are directly accessible via JDBC for customer joins and queries.
Section 8 of the Technical white paper describes features of the vault yet to be implemented including private key management, state splitting and merging, asset re-issuance and node event scheduling.
Time-windows¶
Summary
- If a transaction includes a time-window, it can only be committed during that window
- The notary is the timestamping authority, refusing to commit transactions outside of that window
- Time-windows can have a start and end time, or be open at either end
Time in a distributed system¶
A notary also acts as the timestamping authority, verifying that a transaction occurred during a specific time-window before notarising it.
For a time-window to be meaningful, its implications must be binding on the party requesting it. A party can obtain a time-window signature in order to prove that some event happened before, on, or after a particular point in time. However, if the party is not also compelled to commit to the associated transaction, it has a choice of whether or not to reveal this fact until some point in the future. As a result, we need to ensure that the notary either has to also sign the transaction within some time tolerance, or perform timestamping and notarisation at the same time. The latter is the chosen behaviour for this model.
There will never be exact clock synchronisation between the party creating the transaction and the notary. This is not only due to issues of physics and network latency, but also because between inserting the command and getting the notary to sign there may be many other steps (e.g. sending the transaction to other parties involved in the trade, requesting human sign-off…). Thus the time at which the transaction is sent for notarisation may be quite different to the time at which the transaction was created.
Time-windows¶
For this reason, times in transactions are specified as time windows, not absolute times. In a distributed system there can never be “true time”, only an approximation of it. Time windows can be open-ended (i.e. specify only one of “before” and “after”) or they can be fully bounded.
In this way, we express the idea that the true value of the fact “the current time” is actually unknowable. Even when both a before and an after time are included, the transaction could have occurred at any point within that time-window.
By creating a range that can be either closed or open at one end, we allow all of the following situations to be modelled:
- A transaction occurring at some point after the given time (e.g. after a maturity event)
- A transaction occurring at any time before the given time (e.g. before a bankruptcy event)
- A transaction occurring at some point roughly around the given time (e.g. on a specific day)
If a time window needs to be converted to an absolute time (e.g. for display purposes), there is a utility method to calculate the mid point.
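A minimal sketch of the three shapes above and the mid point utility, assuming a TransactionBuilder named builder is already under construction:

import net.corda.core.contracts.TimeWindow
import java.time.Duration
import java.time.Instant

val after = TimeWindow.fromOnly(Instant.parse("2019-01-01T00:00:00Z"))        // open-ended: only "after"
val before = TimeWindow.untilOnly(Instant.parse("2019-12-31T23:59:59Z"))      // open-ended: only "before"
val around = TimeWindow.withTolerance(Instant.now(), Duration.ofSeconds(30))  // roughly "now"

builder.setTimeWindow(around)
val displayTime = around.midpoint  // null for open-ended windows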
Note
It is assumed that the time feed for a notary is GPS/NaviStar time as defined by the atomic clocks at the US Naval Observatory. This time feed is extremely accurate and available globally for free.
Oracles¶
Summary
- A fact can be included in a transaction as part of a command
- An oracle is a service that will only sign the transaction if the included fact is true
Overview¶
In many cases, a transaction’s contractual validity depends on some external piece of data, such as the current exchange rate. However, if we were to let each participant evaluate the transaction’s validity based on their own view of the current exchange rate, the contract’s execution would be non-deterministic: some signers would consider the transaction valid, while others would consider it invalid. As a result, disagreements would arise over the true state of the ledger.
Corda addresses this issue using oracles. Oracles are network services that, upon request, provide commands that encapsulate a specific fact (e.g. the exchange rate at time x) and list the oracle as a required signer.
If a node wishes to use a given fact in a transaction, it requests a command asserting this fact from the oracle. If the oracle considers the fact to be true, it sends back the required command. The node then includes the command in the transaction, and the oracle signs the transaction to assert that the fact is true.
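As a hedged illustration of this request-and-embed pattern inside a flow: QueryRateFlow and the returned fix command are hypothetical names, while subFlow and TransactionBuilder.addCommand are real APIs.

// Ask the oracle to assert the fact, then embed it as a command that
// lists the oracle as a required signer.
val fix = subFlow(QueryRateFlow(oracle, "GBP/USD"))  // hypothetical oracle query flow
builder.addCommand(fix, oracle.owningKey)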
For privacy purposes, the oracle does not need access to every part of the transaction; the only information it needs to see is the embedded command(s) related to that oracle. We must also guarantee that all of the commands requiring a signature from the oracle are visible to the oracle entity, but nothing else. To achieve this we use filtered transactions, in which the transaction proposer(s) use a nested Merkle tree approach to “tear off” the unrelated parts of the transaction. See Transaction tear-offs for more information on how transaction tear-offs work.
If they wish to monetize their services, oracles can choose to sign a transaction and attest to the validity of the fact it contains only in return for a fee.
Nodes¶
Summary
A node is a JVM run-time with a unique network identity running the Corda software
The node has two interfaces with the outside world:
- A network layer, for interacting with other nodes
- RPC, for interacting with the node’s owner
The node’s functionality is extended by installing CorDapps in the plugin registry
Node architecture¶
A Corda node is a JVM run-time environment with a unique identity on the network that hosts Corda services and CorDapps.
We can visualize the node’s internal architecture as follows:

The core elements of the architecture are:
- A persistence layer for storing data
- A network interface for interacting with other nodes
- An RPC interface for interacting with the node’s owner
- A service hub for allowing the node’s flows to call upon the node’s other services
- A CorDapp interface and provider for extending the node by installing CorDapps
The persistence layer¶
The persistence layer has two parts:
- The vault, where the node stores any relevant current and historic states
- The storage service, where it stores transactions, attachments and flow checkpoints
The node’s owner can query the node’s storage using the RPC interface (see below).
The network interface¶
All communication with other nodes on the network is handled by the node itself, as part of running a flow. The node’s owner does not interact with other network nodes directly.
The RPC interface¶
The node’s owner interacts with the node via remote procedure calls (RPC). The key RPC operations the node exposes are documented in API: RPC operations.
The service hub¶
Internally, the node has access to a rich set of services that are used during flow execution to coordinate ledger updates. The key services provided are:
- Information on other nodes on the network and the services they offer
- Access to the contents of the vault and the storage service
- Access to, and generation of, the node’s public-private keypairs
- Information about the node itself
- The current time, as tracked by the node
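A hedged sketch of the services listed above, as reached from inside a flow via the serviceHub property (the property names below are from the ServiceHub interface):

val peers = serviceHub.networkMapCache.allNodes               // other nodes and the services they offer
val states = serviceHub.vaultService.queryBy<ContractState>() // vault and storage contents
val freshKey = serviceHub.keyManagementService.freshKey()     // generate a new public key
val me = serviceHub.myInfo.legalIdentities.first()            // information about the node itself
val now = serviceHub.clock.instant()                          // the current time, as tracked by the node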
The CorDapp provider¶
The CorDapp provider is where new CorDapps are installed to extend the behavior of the node.
The node also has several CorDapps installed by default to handle common tasks such as:
- Retrieving transactions and attachments from counterparties
- Upgrading contracts
- Broadcasting agreed ledger updates for recording by counterparties
Draining mode¶
In order to operate a clean shutdown of a node, it is important that no flows are in-flight, meaning no checkpoints should be persisted. The node can be put into draining mode, during which:
- Commands to start new flows through RPC will be rejected.
- Scheduled flows that become due will be ignored.
- Initial P2P session messages will not be processed, meaning peers will not be able to initiate new flows involving the node.
- All other activities will proceed as usual, ensuring that the number of in-flight flows will strictly diminish.
Once their number - which can be monitored through RPC - reaches zero, it is safe to shut the node down. This property is durable, meaning that restarting the node will not reset it to its default value; an RPC command is required to turn draining mode off again.
The node can be safely shut down via a drain using the shell.
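A hedged sketch of the same procedure over RPC, assuming a connected CordaRPCOps proxy named rpc; setFlowsDrainingModeEnabled and stateMachinesSnapshot are the Corda 4 RPC operations, and the polling loop is illustrative:

rpc.setFlowsDrainingModeEnabled(true)              // enter draining mode
while (rpc.stateMachinesSnapshot().isNotEmpty()) {
    Thread.sleep(5_000)                            // poll until no flows are in-flight
}
// Safe to shut the node down now. The flag survives restarts, so disable it
// again after maintenance: rpc.setFlowsDrainingModeEnabled(false)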
Transaction tear-offs¶
Summary
- Hide transaction components for privacy purposes
- Oracles and non-validating notaries can only see their “related” transaction components, but not the full transaction details
Overview¶
There are cases where some of the entities involved in a transaction should only have partial visibility of its parts. For instance, when an oracle is asked to sign a transaction, the only information it needs to see is the embedded command(s) related to that oracle. Similarly, a non-validating notary only needs to see a transaction’s input states. Providing any additional transaction data to the oracle would constitute a privacy leak.
To combat this, we use the concept of filtered transactions, in which the transaction proposer(s) uses a nested Merkle tree approach to “tear off” any parts of the transaction that the oracle/notary doesn’t need to see before presenting it to them for signing. A Merkle tree is a well-known cryptographic scheme that is commonly used to provide proofs of inclusion and data integrity. Merkle trees are widely used in peer-to-peer networks, blockchain systems and git.
The advantage of a Merkle tree is that the parts of the transaction that were torn off when presenting the transaction to the oracle cannot later be changed without also invalidating the oracle’s digital signature.
Transaction Merkle trees¶
A Merkle tree is constructed from a transaction by splitting the transaction into leaves, where each leaf contains either an input, an output, a command, or an attachment. The final nested tree structure also contains the other fields of the transaction, such as the time-window, the notary and the required signers. As shown in the picture below, the only component type that requires two trees instead of one is the command, which is split into command data and required signers for visibility purposes.
Corda uses a patent-pending approach of nested Merkle trees per component type. Briefly, a component sub-tree is generated for each component type (i.e., inputs, outputs, attachments). Then, the roots of these sub-trees form the leaves of the top Merkle tree, and finally the root of this tree represents the transaction id.
Another important feature is that a nonce is deterministically generated for each component in a way that each nonce is independent. Then, we use the nonces along with their corresponding components to calculate the component hash, which is the actual Merkle tree leaf. Nonces are required to protect against brute force attacks that otherwise would reveal the content of low-entropy hashed values (i.e., a single-word text attachment).
After computing the leaves, each Merkle tree is built in the normal way by hashing together the concatenation of the hashes of the nodes below the current one. This is visible in the example image below, where H denotes the sha256 function and “+” denotes concatenation.

The transaction has three input states, two output states, two commands, one attachment, a notary and a time-window. Notice that if a tree is not a full binary tree, leaves are padded to the nearest power of 2 with the zero hash (since finding a pre-image of sha256(x) == 0 is a computationally hard task) - marked light green above. Finally, the hash of the root is the identifier of the transaction; it is also used for signing and verification of data integrity. Any change to the transaction at the leaf level will change its identifier.
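A minimal sketch of the node-hash rule in the diagram, where a parent hash is the sha256 of the concatenation of its two children:

import java.security.MessageDigest

fun sha256(bytes: ByteArray): ByteArray = MessageDigest.getInstance("SHA-256").digest(bytes)

// H(parent) = H(left + right), with "+" being byte concatenation.
fun parentHash(left: ByteArray, right: ByteArray): ByteArray = sha256(left + right)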
Hiding data¶
Hiding data and providing the proof that it formed part of a transaction is done by constructing partial Merkle trees (or Merkle branches). A Merkle branch is a set of hashes that, given the leaves’ data, is used to calculate the root’s hash. That hash is then compared with the hash of the whole transaction; if they match, it means the data we obtained belongs to that particular transaction. In the following we provide concrete examples of the data visible to an oracle and a non-validating notary, respectively.
Let’s assume that only the first command should be visible to an Oracle. We should also provide guarantees that all of the commands requiring a signature from this oracle should be visible to the oracle entity, but not the rest. Here is how this filtered transaction will be represented in the Merkle tree structure.

Blue nodes and H(c2) are provided to the Oracle service, while the black ones are omitted. H(c2) is required so that the Oracle can compute H(commandData) without being able to see the second command, while at the same time ensuring CommandData1 is part of the transaction. Note that all signers are visible, so as to have proof that no related command (that the Oracle should see) has been maliciously filtered out. Additionally, hashes of sub-trees (violet nodes) are also provided in the current Corda protocol. The latter is required for special cases, i.e., when one needs to know whether a component group is empty or not.
Having all of the aforementioned data, one can calculate the root of the top tree and compare it with the original transaction identifier - this gives us a proof that this command and time-window belong to this transaction.
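A hedged sketch of producing such a tear-off from a SignedTransaction stx: buildFilteredTransaction is the Corda API, while FixCommand stands in for whatever command type the oracle attests to.

import java.util.function.Predicate
import net.corda.core.contracts.Command

// Keep only the commands that name the oracle as a signer; everything else is torn off.
val ftx = stx.buildFilteredTransaction(Predicate { elem ->
    elem is Command<*> && oracle.owningKey in elem.signers && elem.value is FixCommand
})
// The oracle recomputes the Merkle root from ftx and compares it with the
// original transaction id before signing.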
Along the same lines, if we want to send the same transaction to a non-validating notary we should hide all components apart from input states, time-window and the notary information. This data is enough for the notary to know which input states should be checked for double-spending, if the time-window is valid and if this transaction should be notarised by this notary.

Trade-offs¶
Summary
- Permissioned networks are better suited for financial use-cases
- Point-to-point communication allows information to be shared need-to-know
- A UTXO model allows for more transactions-per-second
Permissioned vs. permissionless¶
Traditional blockchain networks are permissionless. The parties on the network are anonymous, and can join and leave at will.
By contrast, Corda networks are permissioned. Each party on the network has a known identity that they use when communicating with counterparties, and network access is controlled by a doorman. This has several benefits:
- Anonymous parties are inappropriate for most scenarios involving regulated financial institutions
- Knowing the identity of your counterparties allows for off-ledger resolution of conflicts using existing legal systems
- Sybil attacks are averted without the use of expensive mechanisms such as proof-of-work
Point-to-point vs. global broadcast¶
Traditional blockchain networks broadcast every message to every participant. The reason for this is two-fold:
- Counterparty identities are not known, so a message must be sent to every participant to ensure it reaches its intended recipient
- Making every participant aware of every transaction allows the network to prevent double-spends
The downside is that all participants see everyone else’s data. This is unacceptable for many use-cases.
In Corda, each message is instead addressed to a specific counterparty, and is not seen by any uninvolved third parties. The developer has full control over what messages are sent, to whom, and in what order. As a result, data is shared on a need-to-know basis only. To prevent double-spends in this system, we employ notaries as an alternative to proof-of-work.
Corda also uses several other techniques to maximize privacy on the network:
- Transaction tear-offs: Transactions are structured in a way that allows them to be digitally signed without disclosing the transaction’s contents. This is achieved using a data structure called a Merkle tree. You can read more about this technique in Transaction tear-offs.
- Key randomisation: The parties to a transaction are identified only by their public keys, and fresh key pairs are generated for each transaction. As a result, an onlooker cannot identify which parties were involved in a given transaction.
UTXO vs. the account model¶
Corda uses a UTXO (unspent transaction output) model. Each transaction consumes a set of existing states to produce a set of new states.
The alternative would be an account model. In an account model, stateful objects are stored on-ledger, and transactions take the form of requests to update the current state of these objects.
The main advantage of the UTXO model is that transactions with different inputs can be applied in parallel, vastly increasing the network’s potential transactions-per-second. In the account model, the number of transactions-per-second is limited by the fact that updates to a given object must be applied sequentially.
Code-is-law vs. existing legal systems¶
Financial institutions need the ability to resolve conflicts using the traditional legal system where required. Corda is designed to make this possible by:
- Having permissioned networks, meaning that participants are aware of who they are dealing with in every single transaction
- All code contracts should include a LegalProseReference link to the legal document describing the contract’s intended behavior, which can be relied upon to resolve conflicts
Build vs. re-use¶
Wherever possible, Corda re-uses existing technologies to make the platform more robust overall. For example, Corda re-uses:
- Standard JVM programming languages for the development of CorDapps
- Existing SQL databases
- Existing message queue implementations
Deterministic JVM¶
Contents
Introduction¶
It is important that all nodes that process a transaction always agree on whether it is valid or not. Because transaction types are defined using JVM byte code, this means that the execution of that byte code must be fully deterministic. Out of the box a standard JVM is not fully deterministic, thus we must make some modifications in order to satisfy our requirements.
So, what does it mean for a piece of code to be fully deterministic? Ultimately, it means that the code, when viewed as a function, is pure. In other words, given the same set of inputs, it will always produce the same set of outputs without inflicting any side-effects that might later affect the computation.
Important
The code in the DJVM module has not yet been integrated with the rest of the platform. It will eventually become a part of the node and enforce deterministic and secure execution of smart contract code, which is mobile and may propagate around the network without human intervention.
Currently, it stands alone as an evaluation version. We want to give developers the ability to start trying it out and get used to developing deterministic code under the set of constraints that we envision will be placed on contract code in the future.
Non-Determinism¶
For a program running on the JVM, non-determinism could be introduced by a range of sources, for instance:
- External input, e.g., the file system, network, system properties and clocks.
- Random number generators.
- Halting criteria, e.g., different decisions about when to terminate long running programs.
- Hash-codes, or more specifically Object.hashCode(), which is typically implemented either by returning a pointer address or by assigning the object a random number. This could, for instance, surface as different iteration orders over hash maps and hash sets, or be used as non-pure input into arbitrary expressions.
- Differences in hardware floating point arithmetic.
- Multi-threading and consequent differences in scheduling strategies, affinity, etc.
- Differences in API implementations between nodes.
- Garbage collector callbacks.
To ensure that the contract verification function is fully pure even in the face of infinite loops we want to use a custom-built JVM sandbox. The sandbox performs static analysis of loaded byte code and a rewriting pass to allow for necessary instrumentation and constraint hardening.
The byte code rewriting further allows us to patch up and control the default behaviour of things like the hash-code generation for java.lang.Object. Contract code is rewritten the first time it needs to be executed and then stored for future use.
Abstraction¶
The sandbox is abstracted away as an executor which takes as input an implementation of the interface Function&lt;in Input, out Output&gt;, dereferenced by a ClassSource. This interface has a single method that needs implementing, namely apply(Input): Output.
A ClassSource object referencing such an implementation can be passed into the SandboxExecutor&lt;in Input, out Output&gt; together with an input of type Input. The executor has operations for both execution and static validation, namely run() and validate(). These methods both return a summary object.
- In the case of execution, this summary object has information about:
  - Whether or not the runnable was successfully executed.
  - If successful, the return value of Function.apply().
  - If failed, the exception that was raised.
  - And in both cases, a summary of all accrued costs during execution.
- For validation, the summary contains:
  - A type hierarchy of classes and interfaces loaded and touched by the sandbox’s class loader during analysis, each of which contains information about the respective transformations applied as well as meta-data about the types themselves and all references made from said classes.
  - A list of messages generated during the analysis. These can be of different severity, and only messages of severity ERROR will prevent execution.
The sandbox has a configuration that applies to the execution of a specific runnable. This configuration, on a higher level, contains a set of rules, definition providers and emitters.

The set of rules is what defines the constraints posed on the runtime environment. A rule can act on three different levels, namely on a type-, member- or instruction-level. The set of rules gets processed and validated by the RuleValidator prior to execution.
Similarly, there is a set of definition providers which can be used to modify the definition of either a type or a type’s members. This is what controls things like ensuring that all methods implement strict floating point arithmetic, and normalisation of synchronised methods.
Lastly, there is a set of emitters. These are used to instrument the byte code for cost accounting purposes, and also to inject code for checks that we want to perform at runtime or modifications to out-of-the-box behaviour. Many of these emitters will rewrite non-deterministic operations to throw RuleViolationError exceptions instead, which means that the ultimate proof that a function is truly deterministic is that it executes successfully inside the DJVM.
Static Byte Code Analysis¶
In summary, the byte code analysis currently performs the following checks. This is not an exhaustive list as further work may well introduce additional constraints that we would want to place on the sandbox environment.
Note
It is worth noting that not only smart contract code is instrumented by the sandbox, but all code that it can transitively reach. In particular this means that the Java runtime classes and any other library code used in the program are also instrumented and persisted ahead of time.
Disallow Catching ThreadDeath Exception¶
Prevents exception handlers from catching ThreadDeath exceptions. If the developer attempts to catch an Error or a Throwable (both being transitive parent types of ThreadDeath), an explicit check will be injected into the byte code to verify that exceptions that are trying to kill the current thread are not being silenced. Consequently, the user will not be able to bypass an exit signal.
Disallow Catching ThresholdViolationException¶
The ThresholdViolationException is, as the name suggests, used to signal to the sandbox that a cost tracked by the runtime cost accountant has been breached. For obvious reasons, the sandbox needs to protect against user code that tries to catch such exceptions, as doing so would allow the user to bypass the thresholds set out in the execution profile.
Disallow Dynamic Invocation¶
Forbids invokedynamic byte code, as the libraries that support this functionality have historically had security problems and it is primarily needed only by scripting languages. In the future, this constraint will be eased to allow for dynamic invocation in the specific lambda and string concatenation meta-factories used by Java code itself.
Disallow Native Methods¶
Forbids native methods as these provide the user access into operating system functionality such as file handling, network requests, general hardware interaction, threading, etc. These all constitute sources of non-determinism, and allowing such code to be called arbitrarily from the JVM would require deterministic guarantees on the native machine code level. This falls out of scope for the DJVM.
Disallow Finalizer Methods¶
Forbids finalizers as these can be called at unpredictable times during execution, given that their invocation is controlled by the garbage collector. As stated in the standard Java documentation:
Called by the garbage collector on an object when garbage collection determines that there are no more references to the object.
Disallow Overridden Sandbox Package¶
Forbids attempts to override rewritten classes. For instance, when a class com.foo.Bar is loaded into the sandbox, it is analysed, rewritten and placed into sandbox.com.foo.Bar. Attempts to place originating classes in the top-level sandbox package will therefore fail, as this poses a security risk: doing so would essentially bypass rule validation and instrumentation.
Disallow Breakpoints¶
For obvious reasons, the breakpoint operation code is forbidden as this can be exploited to unpredictably suspend code execution and consequently interfere with any time bounds placed on the execution.
Disallow Reflection¶
For now, the use of reflection APIs is forbidden as the unmanaged use of these can provide means of breaking out of the protected sandbox environment.
Disallow Unsupported API Versions¶
Ensures that loaded classes are targeting an API version between 1.5 and 1.8 (inclusive). This is merely to limit the breadth of APIs from the standard runtime that needs auditing.
Runtime Costing¶
The runtime accountant inserts calls to an accounting object before expensive byte code. The goal of this rewrite is to deterministically terminate code that has run for an unacceptably long amount of time or used an unacceptable amount of memory. Types of expensive byte code include method invocation, memory allocation, branching and exception throwing.
The cost instrumentation strategy used is a simple one: just counting byte codes that are known to be expensive to execute. Methods can be limited in size, and jumps count towards the costing budget, allowing us to determine a consistent halting criterion. However, it is still possible to construct byte code sequences by hand that take excessive amounts of time to execute. The cost instrumentation is designed to ensure that infinite loops are terminated and that, if the cost of verifying a transaction becomes unexpectedly large (e.g., contains algorithms with complexity exponential in transaction size), all nodes agree precisely on when to quit. It is not intended as a protection against denial-of-service attacks. If a node is sending you transactions that appear designed to simply waste your CPU time, then simply blocking that node is sufficient to solve the problem, given the lack of global broadcast.
The budgets are separate per operation code type, so there is no unified cost model. Additionally, the instrumentation is high overhead. A more sophisticated design would be to calculate byte code costs statically as much as possible ahead of time, by instrumenting only the entry point of ‘accounting blocks’, i.e., runs of basic blocks that end with either a method return or a backwards jump. Because only an abstract cost matters (this is not a profiler tool) and because the limits are expected to be set relatively high, there is no need to instrument every basic block. Using the max of both sides of a branch is sufficient when neither branch target contains a backwards jump. This sort of design will be investigated if the per-category budget accounting turns out to be insufficient.
A further complexity comes from the need to constrain memory usage. The sandbox imposes a quota on bytes allocated rather than bytes retained in order to simplify the implementation. This strategy is unnecessarily harsh on smart contracts that churn large quantities of garbage yet have relatively small peak heap sizes and, again, it may be that in practice a more sophisticated strategy that integrates with the garbage collector is required in order to set quotas to a usefully generic level.
Note
The current thresholds have been set arbitrarily for demonstration purposes and should not be relied upon as sensible defaults in a production environment.
Instrumentation and Rewriting¶
Always Use Strict Floating Point Arithmetic¶
Sets the strictfp flag on all methods, which requires the JVM to do floating point arithmetic in a hardware independent fashion. Whilst we anticipate that floating point arithmetic is unlikely to feature in most smart contracts (big integer and big decimal libraries are available), it is available for those who want to use it.
Always Use Exact Math¶
Replaces integer and long addition and multiplication with calls to Math.addExact() and Math.multiplyExact(), respectively. Further work can be done to implement exact operations for increments, decrements and subtractions as well. These calls into java.lang.Math essentially implement checked arithmetic over integers, which will throw an exception if the operation overflows.
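A minimal sketch of what the rewrite amounts to, calling the same java.lang.Math methods directly:

val sum = Math.addExact(1, 2)                        // 3
val overflow = Math.multiplyExact(Int.MAX_VALUE, 2)  // throws ArithmeticException instead of silently wrapping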
Always Inherit From Sandboxed Object¶
As mentioned further up, Object.hashCode() is typically implemented using either the memory address of the object or a random number, both of which are non-deterministic. The DJVM shields the runtime from this source of non-determinism by rewriting all classes that inherit from java.lang.Object to derive from sandbox.java.lang.Object instead. This sandboxed Object implementation takes a hash-code as an input argument to the primary constructor, persists it and returns the value from the hashCode() method implementation. It also has an overridden implementation of toString().
The loaded classes are further rewritten in two ways:
- All allocations of new objects of type java.lang.Object get mapped into using the sandboxed object.
- Calls to the constructor of java.lang.Object get mapped to the constructor of sandbox.java.lang.Object instead, passing in a constant value for now. In the future, we can easily have this passed-in hash-code be a pseudo random number seeded with, for instance, the hash of the transaction or some other dynamic value, provided of course that it is deterministically derived.
Disable Synchronised Methods and Blocks¶
The DJVM doesn’t support multi-threading and so synchronised methods and code blocks have little use in sandboxed code. Consequently, we automatically transform them into ordinary methods and code blocks instead.
Future Work¶
Further work is planned:
- To enable controlled use of reflection APIs.
- Currently, dynamic invocation is disallowed. Allow specific lambda and string concatenation meta-factories used by Java code itself.
- Map more mathematical operations to use their ‘exact’ counterparts.
- General tightening of the enforced constraints.
- Cost accounting of runtime metrics such as memory allocation, branching and exception handling. More specifically, defining sensible runtime thresholds and making further improvements to the instrumentation.
- More sophisticated runtime accounting as discussed in Runtime Costing.
Command-line Tool¶
Open your terminal and navigate to the djvm directory in the Corda source tree. Then issue the following command:
$ djvm/shell/install
This will build the DJVM tool and install a shortcut on Bash-enabled systems. It will also generate a Bash completion file and store it in the shell folder. This file can be sourced from your Bash initialisation script.
$ cd ~
$ djvm
Now, you can create a new Java file from a skeleton that djvm provides, compile the file, and consequently run it by issuing the following commands:
$ djvm new Hello
$ vim tmp/net/corda/sandbox/Hello.java
$ djvm build Hello
$ djvm run Hello
This run will produce some output similar to this:
Running class net.corda.sandbox.Hello...
Execution successful
- result = null
Runtime Cost Summary:
- allocations = 0
- invocations = 1
- jumps = 0
- throws = 0
The output should be pretty self-explanatory, but just to summarise:
- It prints out the return value from the Function&lt;Object, Object&gt;.apply() method implemented in net.corda.sandbox.Hello.
- It also prints out the aggregated costs for allocations, invocations, jumps and throws.
Other commands to be aware of are:
- djvm check, which allows you to perform some up-front static analysis without running the code. However, be aware that the DJVM also transforms some non-deterministic operations into RuleViolationError exceptions. A successful check therefore does not guarantee that the code will behave correctly at runtime.
- djvm inspect, which allows you to inspect what byte code modifications will be applied to a class.
- djvm show, which displays the transformed byte code of a class, i.e., the end result and not the difference.
The detailed thinking and rationale behind these concepts are presented in two white papers:
- Corda: An Introduction
- Corda: A Distributed Ledger (a.k.a. the technical white paper)
Explanations of the key concepts are also available as videos.
CorDapps¶
What is a CorDapp?¶
CorDapps (Corda Distributed Applications) are distributed applications that run on the Corda platform. The goal of a CorDapp is to allow nodes to reach agreement on updates to the ledger. They achieve this goal by defining flows that Corda node owners can invoke over RPC:

CorDapp components¶
CorDapps take the form of a set of JAR files containing class definitions written in Java and/or Kotlin.
These class definitions will commonly include the following elements:
- Flows: Define a routine for the node to run, usually to update the ledger (see Key Concepts - Flows). They subclass FlowLogic
- States: Define the facts over which agreement is reached (see Key Concepts - States). They implement the ContractState interface
- Contracts, defining what constitutes a valid ledger update (see Key Concepts - Contracts). They implement the Contract interface
- Services, providing long-lived utilities within the node. They subclass SingletonSerializationToken
- Serialisation whitelists, restricting what types your node will receive off the wire. They implement the SerializationWhitelist interface
But the CorDapp JAR can also include other class definitions. These may include:
- APIs and static web content: These are served by Corda’s built-in webserver. This webserver is not production-ready, and should be used for testing purposes only
- Utility classes
An example¶
Suppose a node owner wants their node to be able to trade bonds. They may choose to install a Bond Trading CorDapp with the following components:
- A BondState, used to represent bonds as shared facts on the ledger
- A BondContract, used to govern which ledger updates involving BondState states are valid
- Three flows:
  - An IssueBondFlow, allowing new BondState states to be issued onto the ledger
  - A TradeBondFlow, allowing existing BondState states to be bought and sold on the ledger
  - An ExitBondFlow, allowing existing BondState states to be exited from the ledger
After installing this CorDapp, the node owner will be able to use the flows defined by the CorDapp to agree ledger updates related to issuance, sale, purchase and exit of bonds.
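A hypothetical sketch of that interaction over RPC: TradeBondFlow, bondId and buyer are illustrative names from the example above, while CordaRPCClient and startFlowDynamic are the Corda RPC APIs.

// Connect to the node and start one of the CorDapp's flows, then wait for its result.
val proxy = CordaRPCClient(nodeAddress).start("user", "password").proxy
val handle = proxy.startFlowDynamic(TradeBondFlow::class.java, bondId, buyer)
val result = handle.returnValue.get()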
Writing and building apps that run on both Corda (open source) and Corda Enterprise¶
Corda and Corda Enterprise are compatible and interoperable, which means you can write a CorDapp that can run on both. To make this work in practice you should follow these steps:
- Ensure your CorDapp is designed per Structuring a CorDapp and annotated according to CorDapp separation. In particular, it is critical to separate the consensus-critical parts of your application (contracts, states and their dependencies) from the rest of the business logic (flows, APIs, etc). The former - the CorDapp kernel - is the Jar that will be attached to transactions creating/consuming your states and is the Jar that any node on the network verifying the transaction must execute.
Note
It is also important to understand how to manage any dependencies a CorDapp may have on 3rd party libraries and other CorDapps. Please read Setting your dependencies to understand the options and recommendations with regards to correctly Jar’ing CorDapp dependencies.
- Compile this CorDapp kernel Jar once, and then depend on it from your workflows Jar (or Jars - see below). Importantly, if you want your app to work on both Corda and Corda Enterprise, you must compile this Jar against Corda, not Corda Enterprise. This is because, in future, we may add additional functionality to Corda Enterprise that is not in Corda and you may inadvertently create a CorDapp kernel that does not work on Corda open source. Compiling against Corda open source as a matter of course prevents this risk, as well as preventing the risk that you inadvertently create two different versions of the Jar, which will have different hashes and hence break compatibility and interoperability.
Note
As of Corda 4 it is recommended to use CorDapp Jar signing to leverage the new signature constraints functionality.
- Your workflow Jar(s) should depend on the CorDapp kernel (contract, states and dependencies). Importantly, you can create different workflow Jars for Corda and Corda Enterprise, because the workflows Jar is not consensus critical. For example, you may wish to add additional features to your CorDapp for when it is run on Corda Enterprise (perhaps it uses advanced features of one of the supported enterprise databases or includes advanced database migration scripts, or some other Enterprise-only feature).
In summary, structure your app as kernel (contracts, states, dependencies) and workflow (the rest) and be sure to compile the kernel against Corda open source. You can compile your workflow (Jars) against the distribution of Corda that they target.
Getting set up for CorDapp development¶
Software requirements¶
Corda uses industry-standard tools:
- Java 8 JVM - we require at least version 8u171, but do not currently support Java 9 or higher. We have tested with Oracle JDK, Amazon Corretto, and Red Hat’s OpenJDK builds. Please note that OpenJDK builds usually exclude JavaFX, which our GUI tools require.
- IntelliJ IDEA - supported versions 2017.x and 2018.x (with Kotlin plugin version 1.2.71)
- Gradle - we use 4.10, and the gradlew script in the project / samples directories will download it for you.
Please note:
- Applications on Corda (CorDapps) can be written in any language targeting the JVM. However, Corda itself and most of the samples are written in Kotlin. Kotlin is an official Android language, and you can read more about why Kotlin is a strong successor to Java here. If you’re unfamiliar with Kotlin, there is an official getting started guide, and a series of Kotlin Koans
- IntelliJ IDEA is recommended due to the strength of its Kotlin integration.
Following these software recommendations will minimize the number of errors you encounter, and make it easier for others to provide support. However, if you do use other tools, we’d be interested to hear about any issues that arise.
Set-up instructions¶
The instructions below will allow you to set up your development environment for running Corda and writing CorDapps. If you have any issues, please reach out on Stack Overflow or via our Slack channels.
The set-up instructions are available for the following platforms:
- Windows (or in video form)
- Mac (or in video form)
- Debian/Ubuntu
- Fedora
Windows¶
Warning
If you are using a Mac, Debian/Ubuntu or Fedora machine, please follow the Mac, Debian/Ubuntu or Fedora instructions instead.
Java¶
- Visit http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
- Scroll down to “Java SE Development Kit 8uXXX” (where “XXX” is the latest minor version number)
- Toggle “Accept License Agreement”
- Click the download link for jdk-8uXXX-windows-x64.exe (where “XXX” is the latest minor version number)
- Download and run the executable to install Java (use the default settings)
- Add Java to the PATH environment variable by following the instructions at https://docs.oracle.com/javase/7/docs/webnotes/install/windows/jdk-installation-windows.html#path
- Open a new command prompt and run java -version to test that Java is installed correctly
Git¶
- Visit https://git-scm.com/download/win
- Click the “64-bit Git for Windows Setup” download link.
- Download and run the executable to install Git (use the default settings)
- Open a new command prompt and type git --version to test that git is installed correctly
IntelliJ¶
- Visit https://www.jetbrains.com/idea/download/download-thanks.html?code=IIC
- Download and run the executable to install IntelliJ Community Edition (use the default settings)
- Ensure the Kotlin plugin in Intellij is updated to version 1.2.71
Mac¶
Warning
If you are using a Windows, Debian/Ubuntu or Fedora machine, please follow the Windows, Debian/Ubuntu or Fedora instructions instead.
Java¶
- Visit http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
- Scroll down to “Java SE Development Kit 8uXXX” (where “XXX” is the latest minor version number)
- Toggle “Accept License Agreement”
- Click the download link for jdk-8uXXX-macosx-x64.dmg (where “XXX” is the latest minor version number)
- Download and run the executable to install Java (use the default settings)
- Open a new terminal window and run java -version to test that Java is installed correctly
IntelliJ¶
- Visit https://www.jetbrains.com/idea/download/download-thanks.html?platform=mac&code=IIC
- Download and run the executable to install IntelliJ Community Edition (use the default settings)
- Ensure the Kotlin plugin in Intellij is updated to version 1.2.71
Debian/Ubuntu¶
Warning
If you are using a Mac, Windows or Fedora machine, please follow the Mac, Windows or Fedora instructions instead.
These instructions were tested on Ubuntu Desktop 18.04 LTS.
Java¶
- Open a new terminal and add the Oracle PPA to your repositories by typing sudo add-apt-repository ppa:webupd8team/java. Press ENTER when prompted.
- Update your packages list with the command sudo apt update
- Install the Oracle JDK 8 by typing sudo apt install oracle-java8-installer. Press Y when prompted and agree to the licence terms.
- Verify that the JDK was installed correctly by running java -version
Git¶
- From the terminal, Git can be installed using apt with the command sudo apt install git
- Verify that git was installed correctly by typing git --version
IntelliJ¶
Jetbrains offers a pre-built snap package that allows for easy, one-step installation of IntelliJ onto Ubuntu.
- To download the snap, navigate to https://snapcraft.io/intellij-idea-community
- Click Install, then View in Desktop Store. Choose Ubuntu Software in the Launch Application window.
- Ensure the Kotlin plugin in IntelliJ is updated to version 1.2.71
Fedora¶
Warning
If you are using a Mac, Windows or Debian/Ubuntu machine, please follow the Mac, Windows or Debian/Ubuntu instructions instead.
These instructions were tested on Fedora 28.
Java¶
- Download the RPM installation file of Oracle JDK from https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
- Install the package with rpm -ivh jdk-<version>-linux-<architecture>.rpm or use the default software manager
- Choose the Java version by using the command alternatives --config java
- Verify that the JDK was installed correctly by running java -version
Git¶
- From the terminal, Git can be installed using dnf with the command sudo dnf install git
- Verify that git was installed correctly by typing git --version
IntelliJ¶
- Visit https://www.jetbrains.com/idea/download/download-thanks.html?platform=linux&code=IIC
- Unpack the tar.gz file using the following command: tar xfz ideaIC-<version>.tar.gz -C /opt
- Run IntelliJ with /opt/ideaIC-<version>/bin/idea.sh
- Ensure the Kotlin plugin in IntelliJ is updated to version 1.2.71
Next steps¶
First, run the example CorDapp.
Next, read through the Corda Key Concepts to understand how Corda works.
By then, you’ll be ready to start writing your own CorDapps. Learn how to do this in the Hello, World tutorial. You may want to refer to the API documentation, the flow cookbook and the samples along the way.
If you encounter any issues, please ask on Stack Overflow or via our Slack channels.
Running the example CorDapp¶
Contents
The example CorDapp allows nodes to agree IOUs with each other, as long as they obey the following contract rules:
- The IOU’s value is strictly positive
- A node is not trying to issue an IOU to itself
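To make the two rules concrete, here is a minimal Kotlin sketch of how a contract could encode them, written in the style of the example's IOUContract. The class and field names used here (IOUState, value, lender, borrower) are assumptions based on the sample and may differ slightly from the shipped source:

import net.corda.core.contracts.CommandData
import net.corda.core.contracts.Contract
import net.corda.core.contracts.ContractState
import net.corda.core.contracts.requireSingleCommand
import net.corda.core.contracts.requireThat
import net.corda.core.identity.AbstractParty
import net.corda.core.identity.Party
import net.corda.core.transactions.LedgerTransaction

data class IOUState(val value: Int, val lender: Party, val borrower: Party) : ContractState {
    override val participants: List<AbstractParty> get() = listOf(lender, borrower)
}

class IOUContract : Contract {
    companion object { const val ID = "com.example.contract.IOUContract" }
    interface Commands : CommandData { class Create : Commands }

    override fun verify(tx: LedgerTransaction) {
        tx.commands.requireSingleCommand<Commands.Create>()
        val out = tx.outputsOfType<IOUState>().single()
        requireThat {
            // Rule 1: the IOU's value is strictly positive.
            "The IOU's value must be positive." using (out.value > 0)
            // Rule 2: a node is not trying to issue an IOU to itself.
            "The lender and the borrower cannot be the same node." using (out.lender != out.borrower)
        }
    }
}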
We will deploy and run the CorDapp on four test nodes:
- Notary, which runs a notary service
- PartyA
- PartyB
- PartyC
Because data is only propagated on a need-to-know basis, any IOUs agreed between PartyA and PartyB become “shared facts” between PartyA and PartyB only. PartyC won’t be aware of these IOUs.
Downloading the example CorDapp¶
Start by downloading the example CorDapp from GitHub:
- Set up your machine by following the quickstart guide
- Clone the samples repository using the following command: git clone https://github.com/corda/samples
- Change directories to the cordapp-example folder: cd samples/cordapp-example
Opening the example CorDapp in IntelliJ¶
Let’s open the example CorDapp in IntelliJ IDEA:
- Open IntelliJ
- A splash screen will appear. Click open, navigate to and select the cordapp-example folder, and click OK
- Once the project is open, click File, then Project Structure. Under Project SDK:, set the project SDK by clicking New..., clicking JDK, and navigating to C:\Program Files\Java\jdk1.8.0_XXX on Windows or Library/Java/JavaVirtualMachines/jdk1.8.XXX on MacOSX (where XXX is the latest minor version number). Click Apply followed by OK
- Again under File then Project Structure, select Modules. Click +, then Import Module, then select the cordapp-example folder and click Open. Choose to Import module from external model, select Gradle, click Next then Finish (leaving the defaults) and OK
- Gradle will now download all the project dependencies and perform some indexing. This usually takes a minute or so
Project structure¶
The example CorDapp has the following structure:
.
├── LICENCE
├── README.md
├── TRADEMARK
├── build.gradle
├── clients
│ ├── build.gradle
│ └── src
│ └── main
│ ├── kotlin
│ │ └── com
│ │ └── example
│ │ └── server
│ │ ├── MainController.kt
│ │ ├── NodeRPCConnection.kt
│ │ └── Server.kt
│ └── resources
│ ├── application.properties
│ └── public
│ ├── index.html
│ └── js
│ └── angular-module.js
├── config
│ ├── dev
│ │ └── log4j2.xml
│ └── test
│ └── log4j2.xml
├── contracts-java
│ ├── build.gradle
│ └── src
│ └── main
│ └── java
│ └── com
│ └── example
│ ├── contract
│ │ └── IOUContract.java
│ ├── schema
│ │ ├── IOUSchema.java
│ │ └── IOUSchemaV1.java
│ └── state
│ └── IOUState.java
├── contracts-kotlin
│ ├── build.gradle
│ └── src
│ └── main
│ └── kotlin
│ └── com
│ └── example
│ ├── contract
│ │ └── IOUContract.kt
│ ├── schema
│ │ └── IOUSchema.kt
│ └── state
│ └── IOUState.kt
├── cordapp-example.iml
├── gradle
│ └── wrapper
│ ├── gradle-wrapper.jar
│ └── gradle-wrapper.properties
├── gradle.properties
├── gradlew
├── gradlew.bat
├── lib
│ ├── README.txt
│ └── quasar.jar
├── settings.gradle
├── workflows-java
│ ├── build.gradle
│ └── src
│ ├── integrationTest
│ │ └── java
│ │ └── com
│ │ └── example
│ │ └── DriverBasedTests.java
│ ├── main
│ │ └── java
│ │ └── com
│ │ └── example
│ │ └── flow
│ │ └── ExampleFlow.java
│ └── test
│ └── java
│ └── com
│ └── example
│ ├── NodeDriver.java
│ ├── contract
│ │ └── IOUContractTests.java
│ └── flow
│ └── IOUFlowTests.java
└── workflows-kotlin
├── build.gradle
└── src
├── integrationTest
│ └── kotlin
│ └── com
│ └── example
│ └── DriverBasedTests.kt
├── main
│ └── kotlin
│ └── com
│ └── example
│ └── flow
│ └── ExampleFlow.kt
└── test
└── kotlin
└── com
└── example
├── NodeDriver.kt
├── contract
│ └── IOUContractTests.kt
└── flow
└── IOUFlowTests.kt
The key files and directories are as follows:
- The root directory contains some gradle files, a README and a LICENSE
- config contains log4j2 configs
- gradle contains the gradle wrapper, which allows the use of Gradle without installing it yourself and worrying about which version is required
- lib contains the Quasar jar which rewrites our CorDapp’s flows to be checkpointable
- clients contains the source code for spring boot integration
- contracts-java and workflows-java contain the source code for the example CorDapp written in Java
- contracts-kotlin and workflows-kotlin contain the same source code, but written in Kotlin. CorDapps can be developed in either Java or Kotlin
Running the example CorDapp¶
There are two ways to run the example CorDapp:
- Via the terminal
- Via IntelliJ
Both approaches will create a set of test nodes, install the CorDapp on these nodes, and then run the nodes. You can read more about how we generate nodes here.
Running the example CorDapp from the terminal¶
Building the example CorDapp¶
- Open a terminal window in the cordapp-example directory
- Run the deployNodes Gradle task to build four nodes with our CorDapp already installed on them:
  - Unix/Mac OSX: ./gradlew deployNodes
  - Windows: gradlew.bat deployNodes
Note
CorDapps can be written in any language targeting the JVM. In our case, we've provided the example source in both Kotlin and Java. Since both sets of source files are functionally identical, we will refer to the Kotlin version throughout the documentation.
After the build finishes, you will see the following output in the workflows-kotlin/build/nodes folder:
- A folder for each generated node
- A runnodes shell script for running all the nodes simultaneously on OS X
- A runnodes.bat batch file for running all the nodes simultaneously on Windows
Each node in the nodes folder will have the following structure:

. nodeName
├── additional-node-infos
├── certificates
├── corda.jar               // The Corda node runtime
├── cordapps                // The node's CorDapps
│   ├── corda-finance-contracts-4.1-RC01.jar
│   ├── corda-finance-workflows-4.1-RC01.jar
│   └── cordapp-example-0.1.jar
├── drivers
├── logs
├── network-parameters
├── node.conf               // The node's configuration file
├── nodeInfo-<HASH>         // The hash will be different each time you generate a node
└── persistence.mv.db       // The node's database
Note
deployNodes is a utility task to create an entirely new set of nodes for testing your CorDapp. In production, you would instead create a single node as described in Creating nodes locally and build your CorDapp JARs as described in Building and installing a CorDapp.
Running the example CorDapp¶
Start the nodes by running the following command from the root of the cordapp-example folder:
- Unix/Mac OSX: workflows-kotlin/build/nodes/runnodes
- Windows: call workflows-kotlin\build\nodes\runnodes.bat
Each Spring Boot server needs to be started in its own terminal/command prompt; replace X with A, B and C:
- Unix/Mac OSX: ./gradlew runPartyXServer
- Windows: gradlew.bat runPartyXServer

Look for the 'Started ServerKt in X seconds' message; don't rely on the % indicator.
Warning
On Unix/Mac OSX, do not click/change focus until all seven additional terminal windows have opened, or some nodes may fail to start.
For each node, the runnodes script creates a node tab/window:
______ __
/ ____/ _________/ /___ _
/ / __ / ___/ __ / __ `/ Top tip: never say "oops", instead
/ /___ /_/ / / / /_/ / /_/ / always say "Ah, Interesting!"
\____/ /_/ \__,_/\__,_/
--- Corda Open Source corda-4.1-RC01 (4157c25) -----------------------------------------------
Logs can be found in : /Users/joeldudley/Desktop/cordapp-example/workflows-kotlin/build/nodes/PartyA/logs
Database connection url is : jdbc:h2:tcp://localhost:59472/node
Incoming connection address : localhost:10005
Listening on port : 10005
Loaded CorDapps : corda-finance-corda-4.1-RC01, cordapp-example-0.1, corda-core-corda-4.1-RC01
Node for "PartyA" started up and registered in 38.59 sec
Welcome to the Corda interactive shell.
Useful commands include 'help' to see what is available, and 'bye' to shut down the node.
Fri Mar 02 17:34:02 GMT 2018>>>
It usually takes around 60 seconds for the nodes to finish starting up. To ensure that all the nodes are running, you can query the 'status' end-point located at http://localhost:[port]/api/status (e.g. http://localhost:50005/api/status for PartyA).
Running the example CorDapp from IntelliJ¶
- Select the Run Example CorDapp - Kotlin run configuration from the drop-down menu at the top right-hand side of the IDE
- Click the green arrow to start the nodes
- To stop the nodes, press the red square button at the top right-hand side of the IDE, next to the run configurations
Interacting with the example CorDapp¶
Via HTTP¶
The Spring Boot servers run locally on the following ports:
- PartyA: localhost:50005
- PartyB: localhost:50006
- PartyC: localhost:50007
These ports are defined in clients/build.gradle.
Each Spring Boot server exposes the following endpoints:
- /api/example/me
- /api/example/peers
- /api/example/ious
- /api/example/create-iou with parameters iouValue and partyName, which is the CN name of a node

There is also a web front-end served from the home web page, e.g. localhost:50005.
Warning
The content is only available for demonstration purposes and does not implement anti-XSS, anti-XSRF or other security techniques. Do not use this code in production.
Creating an IOU via the endpoint¶
An IOU can be created by sending a PUT request to the /api/example/create-iou endpoint directly, or by using the web form served from the home directory.
To create an IOU between PartyA and PartyB, run the following command from the command line:
curl -X PUT 'http://localhost:50005/api/example/create-iou?iouValue=1&partyName=O=PartyB,L=New%20York,C=US'
Note that both PartyA's port number (50005) and PartyB are referenced in the PUT request path. This command instructs PartyA to agree an IOU with PartyB. Once the process is complete, both nodes will have a signed, notarised copy of the IOU. PartyC will not.
Submitting an IOU via the web front-end¶
To create an IOU between PartyA and PartyB, navigate to the home directory for the node, click the “create IOU” button at the top-left of the page, and enter the IOU details into the web-form. The IOU must have a positive value. For example:
Counterparty: Select from list
Value (Int): 5
And click submit. Upon clicking submit, the modal dialogue will close, and the nodes will agree the IOU.
Checking the output¶
Assuming all went well, you can view the newly-created IOU by accessing the vault of PartyA or PartyB:
Via the HTTP API:
- PartyA’s vault: Navigate to http://localhost:50005/api/example/ious
- PartyB’s vault: Navigate to http://localhost:50006/api/example/ious
Via home page:
- PartyA: Navigate to http://localhost:50005 and hit the “refresh” button
- PartyB: Navigate to http://localhost:50006 and hit the “refresh” button
The vault and web front-end of PartyC (at localhost:50007) will not display any IOUs. This is because PartyC was not involved in this transaction.
Via the interactive shell (terminal only)¶
Nodes started via the terminal will display an interactive shell:
Welcome to the Corda interactive shell.
Useful commands include 'help' to see what is available, and 'bye' to shut down the node.
Fri Jul 07 16:36:29 BST 2017>>>
Type flow list in the shell to see a list of the flows that your node can run. In our case, this will return the following list:
com.example.flow.ExampleFlow$Initiator
net.corda.core.flows.ContractUpgradeFlow$Authorise
net.corda.core.flows.ContractUpgradeFlow$Deauthorise
net.corda.core.flows.ContractUpgradeFlow$Initiate
net.corda.finance.flows.CashExitFlow
net.corda.finance.flows.CashIssueAndPaymentFlow
net.corda.finance.flows.CashIssueFlow
net.corda.finance.flows.CashPaymentFlow
net.corda.finance.internal.CashConfigDataFlow
Creating an IOU via the shell¶
We can create a new IOU using the ExampleFlow$Initiator flow. For example, from the interactive shell of PartyA, you can agree an IOU of 50 with PartyB by running flow start ExampleFlow$Initiator iouValue: 50, otherParty: "O=PartyB,L=New York,C=US".
This will print out the following progress steps:
✅ Generating transaction based on new IOU.
✅ Verifying contract constraints.
✅ Signing transaction with our private key.
✅ Gathering the counterparty's signature.
✅ Collecting signatures from counterparties.
✅ Verifying collected signatures.
✅ Obtaining notary signature and recording transaction.
✅ Requesting signature by notary service
Requesting signature by Notary service
Validating response from Notary service
✅ Broadcasting transaction to participants
✅ Done
Checking the output¶
We can also issue RPC operations to the node via the interactive shell. Type run to see the full list of available operations.
You can see the newly-created IOU by running run vaultQuery contractStateType: com.example.state.IOUState.
As before, the interactive shell of PartyC will not display any IOUs.
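The same check can be scripted over RPC instead of typing into the shell. Below is a minimal hedged Kotlin sketch using CordaRPCClient from the corda-rpc library; the RPC port (10006) and the user1/test credentials are placeholders that must match your node's configuration:

import com.example.state.IOUState
import net.corda.client.rpc.CordaRPCClient
import net.corda.core.utilities.NetworkHostAndPort

fun main() {
    // Placeholder address and credentials - substitute your node's RPC settings.
    val client = CordaRPCClient(NetworkHostAndPort("localhost", 10006))
    val connection = client.start("user1", "test")
    try {
        // Equivalent of the shell command:
        // run vaultQuery contractStateType: com.example.state.IOUState
        val ious = connection.proxy.vaultQuery(IOUState::class.java)
        ious.states.forEach { println(it.state.data) }
    } finally {
        connection.notifyServerAndClose()
    }
}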
Via the h2 web console¶
You can connect directly to your node's database to see its stored states, transactions and attachments. To do so, please follow the instructions in Node database.
Running nodes across machines¶
The nodes can be configured to communicate as a network even when distributed across several machines:
Deploy the nodes as usual:
- Unix/Mac OSX: ./gradlew deployNodes
- Windows: gradlew.bat deployNodes

Navigate to the build folder (workflows-kotlin/build/nodes).

For each node, open its node.conf file and change localhost in its p2pAddress to the IP address of the machine where the node will be run (e.g. p2pAddress="10.18.0.166:10007").

These changes require new node-info files to be distributed amongst the nodes. Use the network bootstrapper tool (see Network Bootstrapper) to update the files and have them distributed locally:

java -jar network-bootstrapper.jar workflows-kotlin/build/nodes

Move the node folders to their individual machines (e.g. using a USB key). It is important that none of the nodes - including the notary - end up on more than one machine. Each computer should also have a copy of runnodes and runnodes.bat.

For example, you may end up with the following layout:
- Machine 1: Notary, PartyA, runnodes, runnodes.bat
- Machine 2: PartyB, PartyC, runnodes, runnodes.bat

After starting each node, the nodes will be able to see one another and agree IOUs among themselves.
Warning
The bootstrapper must be run after the node.conf files have been modified, but before the nodes are distributed across machines. Otherwise, the nodes will not be able to communicate.
Note
If you are using H2 and wish to use the same h2port value for two or more nodes, you must only assign them that value after the nodes have been moved to their individual machines. The initial bootstrapping process requires access to the nodes' databases, and if two nodes share the same H2 port, the process will fail.
Testing your CorDapp¶
Corda provides several frameworks for writing unit and integration tests for CorDapps.
Contract tests¶
You can run the CorDapp's contract tests by running the Run Contract Tests - Kotlin run configuration.
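For orientation, contract tests of this kind are built on the ledger DSL from Corda's test utilities. Below is a hedged Kotlin sketch rather than the sample's actual IOUContractTests; the IOUState constructor and the IOUContract.ID/Commands.Create names are assumptions that mirror the contract sketch earlier on this page:

import com.example.contract.IOUContract
import com.example.state.IOUState
import net.corda.core.identity.CordaX500Name
import net.corda.testing.core.TestIdentity
import net.corda.testing.node.MockServices
import net.corda.testing.node.ledger
import org.junit.Test

class IOUContractSketchTests {
    private val ledgerServices = MockServices(listOf("com.example.contract"))
    private val partyA = TestIdentity(CordaX500Name("PartyA", "London", "GB"))
    private val partyB = TestIdentity(CordaX500Name("PartyB", "New York", "US"))

    @Test
    fun `IOU value must be strictly positive`() {
        ledgerServices.ledger {
            transaction {
                // A zero-valued IOU should be rejected by the contract.
                output(IOUContract.ID, IOUState(0, partyA.party, partyB.party))
                command(listOf(partyA.publicKey, partyB.publicKey), IOUContract.Commands.Create())
                fails()
            }
        }
    }
}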
Flow tests¶
You can run the CorDapp's flow tests by running the Run Flow Tests - Kotlin run configuration.
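Flow tests of this kind typically spin up an in-memory MockNetwork. The following is a hedged Kotlin sketch, not the sample's actual IOUFlowTests; the ExampleFlow.Initiator signature (an Int value and a counterparty Party) is an assumption based on the shell example above:

import com.example.flow.ExampleFlow
import net.corda.core.utilities.getOrThrow
import net.corda.testing.node.MockNetwork
import net.corda.testing.node.MockNetworkParameters
import net.corda.testing.node.TestCordapp
import org.junit.After
import org.junit.Assert.assertNotNull
import org.junit.Before
import org.junit.Test

class IOUFlowSketchTests {
    private lateinit var network: MockNetwork

    @Before
    fun setup() {
        network = MockNetwork(MockNetworkParameters(cordappsForAllNodes = listOf(
                TestCordapp.findCordapp("com.example.contract"),
                TestCordapp.findCordapp("com.example.flow"))))
    }

    @After
    fun tearDown() = network.stopNodes()

    @Test
    fun `both parties record the signed transaction`() {
        val nodeA = network.createPartyNode()
        val nodeB = network.createPartyNode()
        val flow = ExampleFlow.Initiator(1, nodeB.info.legalIdentities.first())
        val future = nodeA.startFlow(flow)
        network.runNetwork() // deliver the messages queued between the mock nodes
        val stx = future.getOrThrow()
        // The notarised transaction should be stored on both nodes.
        assertNotNull(nodeA.services.validatedTransactions.getTransaction(stx.id))
        assertNotNull(nodeB.services.validatedTransactions.getTransaction(stx.id))
    }
}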
Integration tests¶
You can run the CorDapp's integration tests by running the Run Integration Tests - Kotlin run configuration.
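Integration tests of this kind are usually driver-based: the test starts real (or in-process) nodes and asserts against them. A minimal hedged Kotlin sketch of the pattern used by the sample's DriverBasedTests:

import net.corda.core.identity.CordaX500Name
import net.corda.core.utilities.getOrThrow
import net.corda.testing.driver.DriverParameters
import net.corda.testing.driver.driver
import org.junit.Test

class DriverSketchTest {
    @Test
    fun `node starts and exposes its identity`() = driver(DriverParameters(startNodesInProcess = true)) {
        val name = CordaX500Name("PartyA", "London", "GB")
        // Start an in-process node and wait for it to come up.
        val handle = startNode(providedName = name).getOrThrow()
        assert(handle.nodeInfo.legalIdentities.first().name == name)
    }
}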
Running tests in IntelliJ¶
We recommend editing your IntelliJ preferences so that you use the Gradle runner - this means that the quasar utils plugin will make sure that some flags (like -javaagent - see below) are set for you.
To switch to using the Gradle runner:
- Navigate to Build, Execution, Deployment -> Build Tools -> Gradle -> Runner (or search for runner)
  - Windows: this is in "Settings"
  - MacOS: this is in "Preferences"
- Set "Delegate IDE build/run actions to gradle" to true
- Set "Run test using:" to "Gradle Test Runner"
If you would prefer to use the built-in IntelliJ JUnit test runner, you can add some code to your build.gradle file and it will copy your quasar JAR file to the lib directory. You will also need to specify -javaagent:lib/quasar.jar and set the run directory to the project root directory for each test.
Add the following to your build.gradle file - ideally to a build.gradle that already contains the quasar-utils plugin line:
apply plugin: 'net.corda.plugins.quasar-utils'
task installQuasar(type: Copy) {
destinationDir rootProject.file("lib")
from(configurations.quasar) {
rename 'quasar-core(.*).jar', 'quasar.jar'
}
}
and then you can run gradlew installQuasar.
CorDapp samples¶
There are two distinct sets of samples provided with Corda: one introduces new developers to how to write CorDapps; the other provides more complex worked examples of how solutions to a number of common designs could be implemented in a CorDapp. The former can be found on the Corda website. In particular, new developers should start with the example CorDapp.
The advanced samples are contained within the samples/ folder of the Corda repository. The most generally useful of these samples are:
- The trader-demo, which shows a delivery-vs-payment atomic swap of commercial paper for cash
- The attachment-demo, which demonstrates uploading attachments to nodes
- The bank-of-corda-demo, which shows a node acting as an issuer of assets (the Bank of Corda) while remote client applications request issuance of some cash on behalf of a node called Big Corporation
Documentation on running the samples can be found inside the sample directories themselves, in the README.md file.
Note
If you would like to see flow activity on the nodes, type flow watch in the node terminal.
Please report any bugs with the samples on GitHub.
Writing a CorDapp¶
Modules¶
The source code for a CorDapp is divided into one or more modules, each of which will be compiled into a separate JAR. Together, these JARs represent a single CorDapp. Typically, a CorDapp contains all the classes required for it to be used standalone. However, some CorDapps are only libraries for other CorDapps and cannot be run standalone.
A common pattern is to have:
- One module containing only the CorDapp’s contracts and/or states, as well as any required dependencies
- A second module containing the remaining classes that depend on these contracts and/or states
This is because each time a contract is used in a transaction, the entire JAR containing the contract’s definition is attached to the transaction. This is to ensure that the exact same contract and state definitions are used when verifying this transaction at a later date. Because of this, you will want to keep this module, and therefore the resulting JAR file, as small as possible to reduce the size of your transactions and keep your node performant.
However, this two-module structure is not prescriptive:
- A library CorDapp containing only contracts and states would only need a single module
- In a CorDapp with multiple sets of contracts and states that do not depend on each other, each independent set of contracts and states would go in a separate module to reduce transaction size
- In a CorDapp with multiple sets of contracts and states that do depend on each other, either keep them in the same module or create separate modules that depend on each other
- The module containing the flows and other classes can be structured in any way because it is not attached to transactions
CorDapp templates¶
You should base your project on one of the following templates:
- Java Template CorDapp (for CorDapps written in Java)
- Kotlin Template CorDapp (for CorDapps written in Kotlin)
Please use the branch of the template that corresponds to the major version of Corda you are using. For example, someone building a CorDapp on Corda 4.1 should use the release-V4 branch of the template.
Build system¶
The templates are built using Gradle. A Gradle wrapper is provided in the wrapper folder, and the dependencies are defined in the build.gradle files. See Building and installing a CorDapp for more information.
No templates are currently provided for Maven or other build systems.
Modules¶
The templates are split into two modules:
- A cordapp-contracts-states module containing the contracts and states
- A cordapp module containing the remaining classes that depend on the cordapp-contracts-states module
These modules will be compiled into two JARs - a cordapp-contracts-states JAR and a cordapp JAR - which together represent the Template CorDapp.
The first module - cordapp-contracts-states¶
Here is the structure of the src directory for the cordapp-contracts-states module of the Java template:
.
└── main
└── java
└── com
└── template
├── TemplateContract.java
└── TemplateState.java
The directory only contains two class definitions:
TemplateContract
TemplateState
These are definitions for classes that we expect to have to send over the wire. They will be compiled into their own CorDapp.
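As a rough guide to their contents, here is a hedged Kotlin rendering of what these two template classes amount to (the template module itself is written in Java, and the real files may differ in detail):

import net.corda.core.contracts.Contract
import net.corda.core.contracts.ContractState
import net.corda.core.identity.AbstractParty
import net.corda.core.transactions.LedgerTransaction

// A placeholder state: the template starts with no fields and no participants.
class TemplateState(override val participants: List<AbstractParty> = listOf()) : ContractState

// A placeholder contract: the template's verification logic is intentionally empty.
class TemplateContract : Contract {
    companion object { const val ID = "com.template.TemplateContract" }

    override fun verify(tx: LedgerTransaction) {
        // No constraints yet - add requireThat { ... } rules as your design firms up.
    }
}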
The second module - cordapp¶
Here is the structure of the src directory for the cordapp module of the Java template:
.
├── main
│ ├── java
│ │ └── com
│ │ └── template
│ │ ├── TemplateApi.java
│ │ ├── TemplateClient.java
│ │ ├── TemplateFlow.java
│ │ ├── TemplateSerializationWhitelist.java
│ │ └── TemplateWebPlugin.java
│ └── resources
│ ├── META-INF
│ │ └── services
│ │ ├── net.corda.core.serialization.SerializationWhitelist
│ │ └── net.corda.webserver.services.WebServerPluginRegistry
│ ├── certificates
│ └── templateWeb
├── test
│ └── java
│ └── com
│ └── template
│ ├── ContractTests.java
│ ├── FlowTests.java
│ └── NodeDriver.java
└── integrationTest
└── java
└── com
└── template
└── DriverBasedTest.java
The src directory is structured as follows:
- main contains the source of the CorDapp
- test contains example unit tests, as well as a node driver for running the CorDapp from IntelliJ
- integrationTest contains an example integration test
Within main, we have the following directories:
- java, which contains the source-code for our CorDapp:
  - TemplateFlow.java, which contains a template FlowLogic subclass
  - TemplateState.java, which contains a template ContractState implementation
  - TemplateContract.java, which contains a template Contract implementation
  - TemplateSerializationWhitelist.java, which contains a template SerializationWhitelist implementation
  - TemplateApi.java, which contains a template API for the deprecated Corda webserver
  - TemplateWebPlugin.java, which registers the API and front-end for the deprecated Corda webserver
  - TemplateClient.java, which contains a template RPC client for interacting with our CorDapp
- resources/META-INF/services, which contains various registries:
  - net.corda.core.serialization.SerializationWhitelist, which registers the CorDapp's serialisation whitelists
  - net.corda.webserver.services.WebServerPluginRegistry, which registers the CorDapp's web plugins
- resources/templateWeb, which contains a template front-end
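As a hedged illustration of the whitelist piece of this layout: a SerializationWhitelist implementation simply lists the classes to allow on the wire, and the resources/META-INF/services file of the same name must contain the implementation's fully-qualified class name. The ExternalLibraryType class below is purely illustrative:

import net.corda.core.serialization.SerializationWhitelist

// Example third-party class we want to allow on the wire (illustrative only).
data class ExternalLibraryType(val value: Int)

class TemplateSerializationWhitelist : SerializationWhitelist {
    // Classes listed here are permitted by Corda's serialisation framework.
    override val whitelist: List<Class<*>> = listOf(ExternalLibraryType::class.java)
}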
In a production CorDapp:
- We would remove the files related to the deprecated Corda webserver (TemplateApi.java, TemplateWebPlugin.java, resources/templateWeb, and net.corda.webserver.services.WebServerPluginRegistry) and replace them with a production-ready webserver
- We would also move TemplateClient.java into a separate module so that it is not included in the CorDapp
Building and installing a CorDapp¶
Contents
CorDapps run on the Corda platform and integrate with it and each other. This article explains how to build CorDapps. To learn what a CorDapp is, please read What is a CorDapp?.
CorDapp format¶
A CorDapp is a semi-fat JAR that contains all of the CorDapp’s dependencies except the Corda core libraries and any other CorDapps it depends on.
For example, if a CorDapp depends on corda-core, your-other-cordapp and apache-commons, then the CorDapp JAR will contain:
- All classes and resources from the apache-commons JAR and its dependencies
- Nothing from the other two JARs
Build tools¶
In the instructions that follow, we assume you are using Gradle and the cordapp plugin to build your CorDapp. You can find examples of building a CorDapp using these tools in the Kotlin CorDapp Template and the Java CorDapp Template.
To ensure you are using the correct version of Gradle, you should use the provided Gradle Wrapper by copying across the following folder and files from the Kotlin CorDapp Template or the Java CorDapp Template to the root of your project:
gradle/
gradlew
gradlew.bat
Setting your dependencies¶
Choosing your Corda, Quasar and Kotlin versions¶
Several ext variables are used in a CorDapp's build.gradle file to define version numbers that should match the version of Corda you're developing against:
- ext.corda_release_version defines the version of Corda itself
- ext.corda_gradle_plugins_version defines the version of the Corda Gradle Plugins
- ext.quasar_version defines the version of Quasar, a library that we use to implement the flow framework
- ext.kotlin_version defines the version of Kotlin (if using Kotlin to write your CorDapp)
The current versions used are as follows:
ext.corda_release_version = '4.1-RC01'
ext.corda_gradle_plugins_version = '4.0.42'
ext.quasar_version = '0.7.10'
ext.kotlin_version = '1.2.71'
In certain cases, you may also wish to build against the unstable Master branch. See Building CorDapps against a non-release branch.
Corda dependencies¶
The cordapp plugin adds three new gradle configurations:
- cordaCompile, which extends compile
- cordaRuntime, which extends runtime
- cordapp, which extends compile
cordaCompile and cordaRuntime indicate dependencies that should not be included in the CorDapp JAR. These configurations should be used for any Corda dependency (e.g. corda-core, corda-node) in order to prevent a dependency from being included twice (once in the CorDapp JAR and once in the Corda JARs). The cordapp dependency is for declaring a compile-time dependency on a "semi-fat" CorDapp JAR in the same way as cordaCompile, except that Cordformation will only deploy CorDapps contained within the cordapp configuration.
Here are some guidelines for Corda dependencies:
- When building a CorDapp, you should always include net.corda:corda-core:$corda_release_version as a cordaCompile dependency, and net.corda:corda:$corda_release_version as a cordaRuntime dependency
- When building an RPC client that communicates with a node (e.g. a webserver), you should include net.corda:corda-rpc:$corda_release_version as a cordaCompile dependency
- When you need to use the network bootstrapper to bootstrap a local network (e.g. when using Cordformation), you should include net.corda:corda-node-api:$corda_release_version as either a cordaRuntime or a runtimeOnly dependency. You may also wish to include an implementation of SLF4J as a runtimeOnly dependency for the network bootstrapper to use
- To use Corda's test frameworks, add net.corda:corda-test-utils:$corda_release_version as a testCompile dependency. Never include corda-test-utils as a compile or cordaCompile dependency
- Any other Corda dependencies you need should be included as cordaCompile dependencies
Here is an overview of the various Corda dependencies:
- corda - The Corda fat JAR. Do not use as a compile dependency. Required as a cordaRuntime dependency when using Cordformation
- corda-confidential-identities - A part of the core Corda libraries. Automatically pulled in by other libraries
- corda-core - Usually automatically included by another dependency; contains core Corda utilities, model, and functionality. Include manually if the utilities are useful or you are writing a library for Corda
- corda-core-deterministic - Used by the Corda node for deterministic contracts. Not likely to be used externally
- corda-djvm - Used by the Corda node for deterministic contracts. Not likely to be used externally
- corda-finance-contracts, corda-finance-workflows and the deprecated corda-finance - The Corda finance CorDapp; use the contracts and flows parts respectively. Only include as a cordaCompile dependency if using as a dependent CorDapp or if you need access to the Corda finance types. Use as a cordapp dependency if using as a CorDapp dependency (see below)
- corda-jackson - Corda Jackson support. Use if you plan to serialise Corda objects to and/or from JSON
- corda-jfx - JavaFX utilities with some Corda-specific models and utilities. Only use with JavaFX apps
- corda-mock - A small library of useful mocks. Use if the classes are useful to you
- corda-node - The Corda node. Do not depend on. Used only by the Corda fat JAR and indirectly in testing frameworks. (If your CorDapp _must_ depend on this for some reason then it should use the compileOnly configuration here - but please don't do this if you can possibly avoid it!)
- corda-node-api - The node API. Required to bootstrap a local network
- corda-node-driver - Testing utility for programmatically starting nodes from JVM languages. Use for tests
- corda-rpc - The Corda RPC client library. Used when writing an RPC client
- corda-serialization - The Corda core serialization library. Automatically included by other dependencies
- corda-serialization-deterministic - The Corda core serialization library. Automatically included by other dependencies
- corda-shell - Used by the Corda node. Never depend on directly
- corda-test-common - A common test library. Automatically included by other test libraries
- corda-test-utils - Used when writing tests against Corda/CorDapps
- corda-tools-explorer - The Node Explorer tool. Do not depend on
- corda-tools-network-bootstrapper - The Network Builder tool. Useful in build scripts
- corda-tools-shell-cli - The Shell CLI tool. Useful in build scripts
- corda-webserver-impl - The Corda webserver fat JAR. Deprecated. Usually only used by build scripts
- corda-webserver - The Corda webserver library. Deprecated. Use a standard webserver library such as Spring instead
Dependencies on other CorDapps¶
Your CorDapp may also depend on classes defined in another CorDapp, such as states, contracts and flows. There are two ways to add another CorDapp as a dependency in your CorDapp's build.gradle file:
- cordapp project(":another-cordapp") (use this if the other CorDapp is defined in a module in the same project)
- cordapp "net.corda:another-cordapp:1.0" (use this otherwise)
The cordapp gradle configuration serves two purposes:
- When using the cordformation Gradle plugin, the cordapp configuration indicates that this JAR should be included on your node as a CorDapp
- When using the cordapp Gradle plugin, the cordapp configuration prevents the dependency from being included in the CorDapp JAR
Note that the cordformation and cordapp Gradle plugins can be used together.
Other dependencies¶
If your CorDapps have any additional external dependencies, they can be specified like normal Kotlin/Java dependencies in Gradle. See the example below, specifically the apache-commons include.

For further information about managing dependencies, see the Gradle docs.
Signing the CorDapp JAR¶
The cordapp plugin can sign the generated CorDapp JAR file using the JAR signing and verification tool. Signing the CorDapp enables its contract classes to use signature constraints instead of other types of constraints; for an explanation of constraints, refer to API: Contract Constraints. By default the JAR file is signed with the Corda development certificate. The signing process can be disabled or configured to use an external keystore.
The signing entry may contain the following parameters:
- enabled - the control flag to enable the signing process; it is set to true by default, set it to false to disable signing
- options - any relevant parameters of the SignJar ANT task; by default the JAR file is signed with the Corda development key, and an external keystore can be specified. The minimal list of required options is shown below; for other options refer to the SignJar task:
  - keystore - the path to the keystore file; by default the cordadevcakeys.jks keystore is shipped with the plugin
  - alias - the alias to sign under; the default value is cordaintermediateca
  - storepass - the keystore password; the default value is cordacadevpass
  - keypass - the private key password if it's different than the password for the keystore; the default value is cordacadevkeypass
  - storetype - the keystore type; the default value is JKS
The parameters can also be set by system properties passed to the Gradle build process. The system properties should be named as the relevant option name prefixed with 'signing.', e.g. a value for alias can be taken from the signing.alias system property. The following system properties can be used: signing.enabled, signing.keystore, signing.alias, signing.storepass, signing.keypass, signing.storetype.
The resolution order of a configuration value is as follows: the signing process takes a value specified in the signing entry first (the empty string "" is also considered a valid value). If the option is not set, the relevant system property named signing.option is tried. If the system property is not set either, the value defaults to the configuration of the Corda development certificate.
An example cordapp plugin with the signing configuration:
cordapp {
signing {
enabled true
options {
keystore "/path/to/jarSignKeystore.p12"
alias "cordapp-signer"
storepass "secret1!"
keypass "secret1!"
storetype "PKCS12"
}
}
//...
CorDapp auto-signing allows the use of signature constraints for contracts from the CorDapp without the need to create a keystore and configure the cordapp plugin. For production deployment, ensure you sign the CorDapp using your own certificate, e.g. by setting system properties to point to an external keystore, or by disabling signing in the cordapp plugin and signing the CorDapp JAR downstream in your build pipeline. A CorDapp signed by the Corda development certificate is accepted by a Corda node only when running in development mode. If a CorDapp signed by the (default) development key is run on a node in production mode (e.g. for testing), the node may be set to accept the development key by adding the cordappSignerKeyFingerprintBlacklist = [] property set to the empty list (see Configuring a node).
Signing options can be contextually overwritten by the relevant system properties as described above. This allows a single build.gradle file to be used for a development build (defaulting to the Corda development keystore) and for a production build (using an external keystore). An example system properties setup for a build process which overrides signing options:
./gradlew -Dsigning.keystore="/path/to/keystore.jks" -Dsigning.alias="alias" -Dsigning.storepass="password" -Dsigning.keypass="password"
Without providing the system properties, the build will sign the CorDapp with the default Corda development keystore:
./gradlew
CorDapp signing can be disabled for a build:
./gradlew -Dsigning.enabled=false
Other system properties can be explicitly assigned to options by calling System.getProperty in the cordapp plugin configuration. For example, the below configuration sets the specific signing algorithm when a system property is available, otherwise defaulting to an empty string:
cordapp {
signing {
options {
sigalg System.getProperty('custom.sigalg','')
}
}
//...
Then the build process can set the value for the custom.sigalg system property and other system properties recognized by the cordapp plugin:
./gradlew -Dcustom.sigalg="SHA256withECDSA" -Dsigning.keystore="/path/to/keystore.jks" -Dsigning.alias="alias" -Dsigning.storepass="password" -Dsigning.keypass="password"
To check if the CorDapp is signed, use the JAR signing and verification tool:
jarsigner -verify path/to/cordapp.jar
The Cordformation plugin can also sign CorDapp JARs when deploying a set of nodes; see Creating nodes locally.
If your build system post-processes the CorDapp JAR, then the modified JAR content may be out-of-date or incomplete with regards to a signature file. In this case you can sign the CorDapp as a separate step and disable the automatic signing by the cordapp plugin.
The cordapp plugin contains a standalone task signJar which uses the same signing configuration. The task has two parameters: inputJars - to pass the JAR files to be signed - and an optional postfix which is added to the name of the signed JARs (it defaults to "-signed"). The signed JARs are returned as the outputJars property.

For example, in order to sign a JAR modified by a modifyCordapp task, create an instance of the net.corda.plugins.SignJar task (named sign below). The output of the modifyCordapp task is passed to inputJars, and the sign task is run after the modifyCordapp one:
task sign(type: net.corda.plugins.SignJar) {
inputJars modifyCordapp
}
modifyCordapp.finalizedBy sign
cordapp {
signing {
enabled false
}
//..
}
The task creates a new JAR file named *-signed.jar which should be used further in your build/publishing process. It is also best practice to disable signing by the cordapp plugin, as shown in the example.
Example¶
Below is a sample CorDapp Gradle dependencies block. When building your own CorDapp, use the build.gradle file of the Kotlin CorDapp Template or the Java CorDapp Template as a starting point.
dependencies {
// Corda integration dependencies
cordaCompile "net.corda:corda-core:$corda_release_version"
cordaCompile "net.corda:corda-finance-contracts:$corda_release_version"
cordaCompile "net.corda:corda-finance-workflows:$corda_release_version"
cordaCompile "net.corda:corda-jackson:$corda_release_version"
cordaCompile "net.corda:corda-rpc:$corda_release_version"
cordaCompile "net.corda:corda-node-api:$corda_release_version"
cordaCompile "net.corda:corda-webserver-impl:$corda_release_version"
cordaRuntime "net.corda:corda:$corda_release_version"
cordaRuntime "net.corda:corda-webserver:$corda_release_version"
testCompile "net.corda:corda-test-utils:$corda_release_version"
// Corda Plugins: dependent flows and services
// Identifying a CorDapp by its module in the same project.
cordapp project(":cordapp-contracts-states")
// Identifying a CorDapp by its fully-qualified name.
cordapp "net.corda:bank-of-corda-demo:1.0"
// Some other dependencies
compile "org.jetbrains.kotlin:kotlin-stdlib-jdk8:$kotlin_version"
testCompile "org.jetbrains.kotlin:kotlin-test:$kotlin_version"
testCompile "junit:junit:$junit_version"
compile "org.apache.commons:commons-lang3:3.6"
}
Creating the CorDapp JAR¶
Once your dependencies are set correctly, you can build your CorDapp JAR(s) using the Gradle jar task:
- Unix/Mac OSX: ./gradlew jar
- Windows: gradlew.bat jar
Each of the project's modules will be compiled into its own CorDapp JAR. You can find these CorDapp JARs in the build/libs folders of each of the project's modules.
Warning
The hash of the generated CorDapp JAR is not deterministic, as it depends on variables such as the timestamp at creation. Nodes running the same CorDapp must therefore ensure they are using the exact same CorDapp JAR, and not different versions of the JAR created from identical sources.
The filename of the JAR must include a unique identifier to deduplicate it from other releases of the same CorDapp. This is typically done by appending the version string to the CorDapp's name. This unique identifier should not change once the JAR has been deployed on a node. If it does, make sure no one is relying on FlowContext.appName in their flows (see Versioning).
Installing the CorDapp JAR¶
Note
Before installing a CorDapp, you must create one or more nodes to install it on. For instructions, please see Creating nodes locally.
At start-up, nodes will load any CorDapps present in their cordapps folder. In order to install a CorDapp on a node, the CorDapp JAR must be added to the <node_dir>/cordapps/ folder (where node_dir is the folder in which the node's JAR and configuration files are stored) and the node restarted.
CorDapp configuration files¶
CorDapp configuration files should be placed in <node_dir>/cordapps/config. The name of the file should match the name of the JAR of the CorDapp (e.g. if your CorDapp is called hello-0.1.jar, the config should be config/hello-0.1.conf).
Config files are currently only available in the Typesafe/Lightbend config format. These files are loaded when a CorDapp context is created and so can change during runtime.
CorDapp configuration can be accessed from CordappContext::config whenever a CordappContext is available. For example:
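A minimal hedged Kotlin sketch of such access, modelled on the cordapp-configuration sample mentioned below; the config key passed in could be a value such as the someStringValue used in the deployNodes example that follows:

import co.paralleluniverse.fibers.Suspendable
import net.corda.core.cordapp.getAppContext
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.StartableByRPC

@StartableByRPC
class GetStringConfigFlow(private val configKey: String) : FlowLogic<String>() {
    @Suspendable
    override fun call(): String {
        // Obtain this CorDapp's context and read a string value from its config file.
        val config = serviceHub.getAppContext().config
        return config.getString(configKey)
    }
}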
Using CorDapp configuration with the deployNodes task¶
If you want to generate CorDapp configuration when using the deployNodes Gradle task, then you can use the cordapp or projectCordapp properties on the node. For example:
task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) {
nodeDefaults {
// this external CorDapp will be included in each project
cordapp("$corda_release_group:corda-finance-contracts:$corda_release_version")
// this external CorDapp will be included in each project with the given config
cordapp("$corda_release_group:corda-finance-workflows:$corda_release_version") {
config "issuableCurrencies = [ USD ]"
}
}
node {
name "O=Bank A,L=London,C=GB"c
...
// This adds configuration for another CorDapp project within the build
cordapp (project(':my-project:workflow-cordapp')) {
config "someStringValue=test"
}
cordapp(project(':my-project:another-cordapp')) {
// Use a multiline string for complex configuration
config '''
someStringValue=test
anotherStringValue=10
'''
}
}
node {
name "O=Bank B,L=New York,C=US"
...
// This adds configuration for the default CorDapp for this project
projectCordapp {
config project.file("src/config.conf")
}
}
}
There is an example project that demonstrates this in the samples folder of the Corda Git repository, called cordapp-configuration. API documentation can be found at api/kotlin/corda/net.corda.core.cordapp/index.html.
Minimum and target platform version¶
CorDapps can advertise their minimum and target platform version. The minimum platform version indicates that a node has to run at least this version in order to be able to run this CorDapp. The target platform version indicates that a CorDapp was tested with this version of the Corda Platform and should be run at this API level if possible. It provides a means of maintaining behavioural compatibility for the cases where the platform’s behaviour has changed. These attributes are specified in the JAR manifest of the CorDapp, for example:
'Min-Platform-Version': 4
'Target-Platform-Version': 4
Defaults:
- Target-Platform-Version (mandatory) is a whole number and must comply with the rules mentioned above.
- Min-Platform-Version (optional) will default to 1 if not specified.
Using the cordapp Gradle plugin, this can be achieved by putting this in your CorDapp’s build.gradle:
cordapp {
targetPlatformVersion 4
minimumPlatformVersion 4
}
Separation of CorDapp contracts, flows and services¶
It is recommended that contract code (states, commands, verification logic) be packaged separately from business flows (and associated services). This decoupling enables contracts to evolve independently from the flows and services that use them. Contracts may even be specified and implemented by different providers (eg. Corda currently ships with a cash financial contract which in turn is used in many other flows and many other CorDapps).
As of Corda 4, CorDapps can explicitly differentiate their type by specifying the following attributes in the JAR manifest:
'Cordapp-Contract-Name'
'Cordapp-Contract-Version'
'Cordapp-Contract-Vendor'
'Cordapp-Contract-Licence'
'Cordapp-Workflow-Name'
'Cordapp-Workflow-Version'
'Cordapp-Workflow-Vendor'
'Cordapp-Workflow-Licence'
Defaults:
Cordapp-Contract-Name (optional): if specified, the following contract-related attributes are also used:
- Cordapp-Contract-Version (mandatory), must be a whole number starting from 1.
- Cordapp-Contract-Vendor (optional), defaults to UNKNOWN if not specified.
- Cordapp-Contract-Licence (optional), defaults to UNKNOWN if not specified.
Cordapp-Workflow-Name (optional): if specified, the following workflow-related attributes are also used:
- Cordapp-Workflow-Version (mandatory), must be a whole number starting from 1.
- Cordapp-Workflow-Vendor (optional), defaults to UNKNOWN if not specified.
- Cordapp-Workflow-Licence (optional), defaults to UNKNOWN if not specified.
As with the general CorDapp attributes (minimum and target platform version), these can be specified using the Gradle cordapp plugin as follows:
For a contract-only CorDapp, we specify the contract tag:
cordapp {
targetPlatformVersion 4
minimumPlatformVersion 3
contract {
name "my contract name"
versionId 1
vendor "my company"
licence "my licence"
}
}
For a CorDapp that contains flows and/or services we specify the workflow tag:
cordapp {
targetPlatformVersion 4
minimumPlatformVersion 3
workflow {
name "my workflow name"
versionId 1
vendor "my company"
licence "my licence"
}
}
Note
It is possible, but not recommended, to include everything in a single CorDapp JAR and use both the contract and workflow Gradle plugin tags.
Warning
Contract states may optionally specify a custom schema mapping (by implementing the QueryableState interface) in the contracts JAR. However, any associated database schema definition scripts (e.g. Liquibase change set XML files) must currently be packaged in the flows JAR. This is because the node requires access to these schema definitions upon start-up (contract JARs are now loaded in a separate attachments classloader). This split also caters for scenarios where the same contract CorDapp may wish to target different database providers (and thus the associated schema DDL may vary to use native features of a particular database). The finance CorDapp provides an illustration of this packaging convention. Future versions of Corda will de-couple this custom schema dependency to remove this anomaly.
CorDapp Contract Attachments¶
As of Corda 4, CorDapp Contract JARs must be installed on a node by a trusted uploader, either by
- installing manually as per Installing the CorDapp JAR and re-starting the node.
- uploading the attachment JAR to the node via RPC, either programmatically (see Connecting to a node via RPC) or via the Node shell by issuing the following command:
>>> run uploadAttachment jar: path/to/the/file.jar
Contract attachments that are received from a peer over the p2p network are considered untrusted and will throw an UntrustedAttachmentsException when processed by a listening flow that cannot resolve that attachment from its local attachment storage. The flow will be aborted and sent to the node's flow hospital for recovery and retry. The untrusted attachment JAR will be stored in the node's local attachment store for review by a node operator. It can be downloaded for viewing using the following CRaSH shell command:
>>> run openAttachment id: <hash of untrusted attachment given by the UntrustedAttachmentsException>
Should the node operator deem the attachment trustworthy, they may then issue the following CRaSH shell command to reload it as trusted:
>>> run uploadAttachment jar: path/to/the/trusted-file.jar
and subsequently retry the failed flow (currently this requires a node re-start).
Note
This behaviour is to protect the node from executing contract code that has not been vetted. It is a temporary precaution until the Deterministic JVM is integrated into Corda, whereby execution will take place in a sandboxed environment which protects the node from malicious code.
Building CorDapps against non-release branches¶
It is advisable to develop CorDapps against the most recent Corda stable release. However, you may need to build a CorDapp against an unstable non-release branch if your CorDapp uses a very recent feature, or you are using the CorDapp to test a PR on the main codebase.
To work against a non-release branch, proceed as follows:
- Clone the Corda repository
- Check out the branch or commit of the Corda repository you want to work against
- Make a note of the gradlePluginsVersion in the root constants.properties file of the Corda repository
- Clone the Corda Gradle Plugins repository
- Check out the tag of the Corda Gradle Plugins repository corresponding to the gradlePluginsVersion
- Follow the instructions in the readme of the Corda Gradle Plugins repository to install this version of the Corda Gradle plugins locally
- Open a terminal window in the folder where you cloned the Corda repository
- Publish Corda to your local Maven repository using the following commands:
  - Unix/Mac OSX: ./gradlew install
  - Windows: gradlew.bat install
Warning
If you do modify your local Corda repository after having published it to Maven local, then you must re-publish it to Maven local for the local installation to reflect the changes you have made.
Warning
As the Corda repository evolves on a daily basis, two clones of an unstable branch at different points in time may differ. If you are using an unstable release and need help debugging an error, then please let us know the commit you are working from. This will help us ascertain the issue.
- Make a note of the corda_release_version in the root build.gradle file of the Corda repository
- In your CorDapp's root build.gradle file:
  - Update ext.corda_release_version to the corda_release_version noted down earlier
  - Update corda_gradle_plugins_version to the gradlePluginsVersion noted down earlier (a minimal sketch of these edits is shown below)
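As an illustration only, the relevant part of a CorDapp's root build.gradle might then look like this (the version strings are hypothetical; use the values you noted down from your local Corda clone):
buildscript {
    ext.corda_release_version = '4.1-SNAPSHOT'
    ext.corda_gradle_plugins_version = '4.0.37'
    // ... the rest of your buildscript block is unchanged ...
}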
Debugging a CorDapp¶
There are several ways to debug your CorDapp.
Using a MockNetwork¶
You can attach the IntelliJ IDEA debugger to a MockNetwork to debug your CorDapp (see the sketch after this list):
- Define your flow tests as per API: Testing
- In your MockNetwork, ensure that threadPerNode is set to false
- Set your breakpoints
- Run the flow tests using the debugger. When the tests hit a breakpoint, execution will pause
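A minimal sketch of such a test network, assuming a CorDapp split into the hypothetical packages com.example.contracts and com.example.workflows:
import net.corda.testing.node.MockNetwork
import net.corda.testing.node.MockNetworkParameters
import net.corda.testing.node.TestCordapp
fun debugOnMockNetwork() {
    // threadPerNode = false keeps all nodes on the calling thread, so breakpoints
    // set in flow code are hit by the debugger running the test.
    val network = MockNetwork(MockNetworkParameters(
        threadPerNode = false,
        cordappsForAllNodes = listOf(
            TestCordapp.findCordapp("com.example.contracts"),
            TestCordapp.findCordapp("com.example.workflows")
        )
    ))
    val nodeA = network.createNode()
    // ... start a flow from nodeA, then pump the network until idle ...
    network.runNetwork()
    network.stopNodes()
}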
Using the node driver¶
You can also attach the IntelliJ IDEA debugger to nodes running via the node driver to debug your CorDapp.
With the nodes in-process¶
- Define a network using the node driver as per Integration testing
- In your DriverParameters, ensure that startNodesInProcess is set to true (see the sketch after this list)
- Run the driver using the debugger
- Set your breakpoints
- Interact with your nodes. When execution hits a breakpoint, execution will pause
  - The nodes' webservers always run in a separate process, and cannot be attached to by the debugger
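A minimal sketch of such a driver, assuming a workflows CorDapp in the hypothetical package com.example.workflows:
import net.corda.core.identity.CordaX500Name
import net.corda.core.utilities.getOrThrow
import net.corda.testing.driver.DriverParameters
import net.corda.testing.driver.driver
import net.corda.testing.node.TestCordapp
fun debugWithNodeDriver() {
    // startNodesInProcess = true runs the nodes inside the driver JVM, so the
    // debugger you launched this function with can reach flow code directly.
    driver(DriverParameters(
        startNodesInProcess = true,
        cordappsForAllNodes = listOf(TestCordapp.findCordapp("com.example.workflows"))
    )) {
        val nodeA = startNode(providedName = CordaX500Name("PartyA", "London", "GB")).getOrThrow()
        // Interact with the node (e.g. via nodeA.rpc) and hit your breakpoints.
    }
}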
With remote debugging¶
- Define a network using the node driver as per Integration testing
- In your DriverParameters, ensure that startNodesInProcess is set to false and isDebug is set to true
- Run the driver. The remote debug ports for each node will be automatically generated and printed to the terminal. For example:
[INFO ] 11:39:55,471 [driver-pool-thread-0] (DriverDSLImpl.kt:814) internal.DriverDSLImpl.startOutOfProcessNode -
Starting out-of-process Node PartyA, debug port is 5008, jolokia monitoring port is not enabled {}
Attach the debugger to the node of interest on its debug port:
  - In IntelliJ IDEA, create a new run/debug configuration of type Remote
  - Set the run/debug configuration's Port to the debug port
  - Start the run/debug configuration in debug mode
- Set your breakpoints
- Interact with your node. When execution hits a breakpoint, execution will pause
  - The nodes' webservers always run in a separate process, and cannot be attached to by the debugger
Versioning¶
As the Corda platform evolves and new features are added it becomes important to have a versioning system which allows its users to easily compare versions and know what features are available to them. Each Corda release uses the standard semantic versioning scheme of major.minor.patch. This is useful when making releases in the public domain but is not friendly for a developer working on the platform. It first has to be parsed, and then there are three separate segments on which to determine API differences. The release version is still useful, and every MQ message the node sends attaches it to the release-version header property for debugging purposes.
Platform version¶
It is much easier to use a single incrementing integer value to represent the API version of the Corda platform, which is called the platform version. It is similar to Android’s API Level. It starts at 1 and will increment by exactly 1 for each release which changes any of the publicly exposed APIs in the entire platform. This includes public APIs on the node itself, the RPC system, messaging, serialisation, etc. API backwards compatibility will always be maintained, with the use of deprecation to suggest migration away from old APIs. In very rare situations APIs may have to be changed, for example due to security issues. There is no relationship between the platform version and the release version - a change in the major or minor values may or may not increase the platform version. However we do endeavour to keep them synchronised for now, as a convenience.
The platform version is part of the node’s NodeInfo
object, which is available from the ServiceHub
. This enables
a CorDapp to find out which version it’s running on and determine whether a desired feature is available. When a node
registers with the network map it will check its own version against the minimum version requirement for the network.
Minimum platform version¶
Applications can advertise a minimum platform version they require. If your app uses new APIs that were added in (for example) Corda 5,
you should specify a minimum version of 5. This will ensure the app won’t be loaded by older nodes. If you can optionally use the new
APIs, you can keep the minimum set to a lower number. Attempting to use new APIs on older nodes can cause NoSuchMethodError
exceptions
and similar problems, so you’d want to check the node version using ServiceHub.myInfo
.
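For an optional feature, a minimal sketch of such a check inside a flow (the version-5 feature itself is hypothetical):
@Suspendable
override fun call() {
    // Platform version of the node this flow is running on.
    val platformVersion = serviceHub.myInfo.platformVersion
    if (platformVersion >= 5) {
        // Use the API added in platform version 5.
    } else {
        // Fall back to the older code path.
    }
}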
Target version¶
Applications can also advertise a target version. This is similar to the concept of the same name in Android and iOS. Apps should advertise the highest version of the platform they have been tested against. This allows the node to activate or deactivate backwards compatibility codepaths depending on whether they’re necessary or not, as workarounds for apps designed for earlier versions.
For example, consider an app that uses new features introduced in Corda 4, but which has passed regression testing on Corda 5. It will advertise a minimum platform version of 4 and a target version of 5. These numbers are published in the JAR manifest file.
If this app is loaded into a Corda 6 node, that node may implement backwards compatibility workarounds for your app that make it slower, less secure, or less featureful. You can opt-in to getting the full benefits of the upgrade by changing your target version to 6. By doing this, you promise that you understood all the changes in Corda 6 and have thoroughly tested your app to prove it works. This testing should include ensuring that the app exhibits the correct behaviour on a node running at the new target version, and that the app functions correctly in a network of nodes running at the same target version.
Target versioning is one of the mechanisms we have to keep the platform evolving and improving, without being permanently constrained to being bug-for-bug compatible with old versions. When no apps are loaded that target old versions, any emulations of older bugs or problems can be disabled.
Publishing versions in your JAR manifests¶
A well structured CorDapp should be split into two separate modules:
- A contracts jar, that contains your states and contract logic.
- A workflows jar, that contains your flows, services and other support libraries.
The reason for this split is that the contract JAR will be attached to transactions and sent around the network, because this code is what defines the data structures and smart contract logic all nodes will validate. If the rest of your app is a part of that same JAR, it’ll get sent around the network too even though it’s not needed and will never be used. By splitting your app into a contracts JAR and a workflows JAR that depends on the contracts JAR, this problem is avoided.
In the build.gradle
file for your contract module, add a block like this:
cordapp {
targetPlatformVersion 5
minimumPlatformVersion 4
contract {
name "MegaApp Contracts"
vendor "MegaCorp"
licence "MegaLicence"
versionId 1
}
}
This will put the necessary entries into your JAR manifest to set both platform version numbers. If they aren't specified, both default to 1. Your app itself can have a version number, which should always increment and must always be an integer.
And in the build.gradle
file for your workflows jar, add a block like this:
cordapp {
targetPlatformVersion 5
minimumPlatformVersion 4
workflow {
name "MegaApp"
vendor "MegaCorp"
licence "MegaLicence"
versionId 1
}
}
It’s entirely expected and reasonable to have an open source contracts module and a proprietary workflow module - the latter may contain sophisticated or proprietary business logic, machine learning models, even user interface code. There’s nothing that restricts it to just being Corda flows or services.
Important
The versionId specified for the JAR manifest is checked by the platform and is used for informative purposes only. See "App versioning with signature constraints" for more information.
Note
You can read the original design doc here: <no title>.
Upgrading CorDapps¶
Note
This document only concerns the upgrading of CorDapps and not the Corda platform itself (wire format, node database schemas, etc.).
CorDapp versioning¶
The Corda platform does not mandate a version number on a per-CorDapp basis. Different elements of a CorDapp are allowed to evolve separately. Sometimes, however, a change to one element will require changes to other elements. For example, changing a shared data structure may require flow changes that are not backwards-compatible.
Flow versioning¶
Any flow that initiates other flows must be annotated with the @InitiatingFlow
annotation, which is defined as:
annotation class InitiatingFlow(val version: Int = 1)
The version
property, which defaults to 1, specifies the flow’s version. This integer value should be incremented
whenever there is a release of a flow which has changes that are not backwards-compatible. A non-backwards compatible
change is one that changes the interface of the flow.
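For example, after a change that breaks the flow's interface, the annotation might be bumped like this (flow name hypothetical):
@InitiatingFlow(version = 2)
class RateQueryFlow(private val counterparty: Party) : FlowLogic<String>() {
    // ...
}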
Defining a flow's interface¶
The flow interface is defined by the sequence of send
and receive
calls between an InitiatingFlow
and an
InitiatedBy
flow, including the types of the data sent and received. We can picture a flow’s interface as follows:
[Diagram: the sequence of send and receive calls between an InitiatingFlow and an InitiatedBy flow]
In the diagram above, the InitiatingFlow:
- Sends an Int
- Receives a String
- Sends a String
- Receives a CustomType
The InitiatedBy flow does the opposite:
- Receives an Int
- Sends a String
- Receives a String
- Sends a CustomType
As long as both the InitiatingFlow
and the InitiatedBy
flows conform to the sequence of actions, the flows can
be implemented in any way you see fit (including adding proprietary business logic that is not shared with other
parties).
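A minimal sketch of a conforming pair of flows (names, types and payloads are illustrative):
import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.*
import net.corda.core.identity.Party
import net.corda.core.utilities.unwrap
@InitiatingFlow
class QueryFlow(private val counterparty: Party) : FlowLogic<String>() {
    @Suspendable
    override fun call(): String {
        val session = initiateFlow(counterparty)
        // Send an Int, then receive a String: this sequence is the flow's interface.
        return session.sendAndReceive<String>(42).unwrap { it }
    }
}
@InitiatedBy(QueryFlow::class)
class QueryResponderFlow(private val otherSession: FlowSession) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        // The mirror image: receive the Int, then send back a String.
        val number = otherSession.receive<Int>().unwrap { it }
        otherSession.send("The number was $number")
    }
}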
Non-backwards compatible flow changes¶
A flow can become backwards-incompatible in two main ways:
- The sequence of send and receive calls changes:
  - A send or receive is added or removed from either the InitiatingFlow or InitiatedBy flow
  - The sequence of send and receive calls changes
- The types of the send and receive calls change
Consequences of running flows with incompatible versions¶
Pairs of InitiatingFlow flows and InitiatedBy flows that have incompatible interfaces are likely to exhibit the following behaviour:
- The flows hang indefinitely and never terminate, usually because a flow expects a response which is never sent from the other side
- One of the flows ends with an exception: "Expected Type X but Received Type Y", because the send or receive types are incorrect
- One of the flows ends with an exception: "Counterparty flow terminated early on the other side", because one flow sends some data to another flow, but the latter flow has already ended
Ensuring flow backwards compatibility¶
The InitiatingFlow
version number is included in the flow session handshake and exposed to both parties via the
FlowLogic.getFlowContext
method. This method takes a Party
and returns a FlowContext
object which describes
the flow running on the other side. In particular, it has a flowVersion
property which can be used to
programmatically evolve flows across versions. For example:
@Suspendable
override fun call() {
val otherFlowVersion = otherSession.getCounterpartyFlowInfo().flowVersion
val receivedString = if (otherFlowVersion == 1) {
otherSession.receive<Int>().unwrap { it.toString() }
} else {
otherSession.receive<String>().unwrap { it }
}
}
@Suspendable
@Override public Void call() throws FlowException {
int otherFlowVersion = otherSession.getCounterpartyFlowInfo().getFlowVersion();
String receivedString;
if (otherFlowVersion == 1) {
receivedString = otherSession.receive(Integer.class).unwrap(integer -> {
return integer.toString();
});
} else {
receivedString = otherSession.receive(String.class).unwrap(string -> {
return string;
});
}
return null;
}
This code shows a flow that in its first version expected to receive an Int, but in subsequent versions was modified to expect a String. This flow is still able to communicate with parties that are running the older CorDapp containing the older flow.
Handling interface changes to inlined subflows¶
Here is an example of an in-lined subflow:
@StartableByRPC
@InitiatingFlow
class FlowA(val recipient: Party) : FlowLogic<Unit>() {
@Suspendable
override fun call() {
subFlow(FlowB(recipient))
}
}
@InitiatedBy(FlowA::class)
class FlowC(val otherSession: FlowSession) : FlowLogic<Unit>() {
// Omitted.
}
// Note: No annotations. This is used as an inlined subflow.
class FlowB(val recipient: Party) : FlowLogic<Unit>() {
@Suspendable
override fun call() {
val message = "I'm an inlined subflow, so I inherit the @InitiatingFlow's session ID and type."
initiateFlow(recipient).send(message)
}
}
@StartableByRPC
@InitiatingFlow
class FlowA extends FlowLogic<Void> {
private final Party recipient;
public FlowA(Party recipient) {
this.recipient = recipient;
}
@Suspendable
@Override public Void call() throws FlowException {
subFlow(new FlowB(recipient));
return null;
}
}
@InitiatedBy(FlowA.class)
class FlowC extends FlowLogic<Void> {
// Omitted.
}
// Note: No annotations. This is used as an inlined subflow.
class FlowB extends FlowLogic<Void> {
private final Party recipient;
public FlowB(Party recipient) {
this.recipient = recipient;
}
@Suspendable
@Override public Void call() {
String message = "I'm an inlined subflow, so I inherit the @InitiatingFlow's session ID and type.";
initiateFlow(recipient).send(message);
return null;
}
}
Inlined subflows are treated as being the flow that invoked them when initiating a new flow session with a counterparty.
Suppose flow A
calls inlined subflow B, which, in turn, initiates a session with a counterparty. The FlowLogic
type used by the counterparty to determine which counter-flow to invoke is determined by A
, and not by B
. This
means that the response logic for the inlined flow must be implemented explicitly in the InitiatedBy
flow. This can
be done either by calling a matching inlined counter-flow, or by implementing the other side explicitly in the
initiated parent flow. Inlined subflows also inherit the session IDs of their parent flow.
As such, an interface change to an inlined subflow must be considered a change to the parent flow interfaces.
An example of an inlined subflow is CollectSignaturesFlow
. It has a response flow called SignTransactionFlow
that isn’t annotated with InitiatedBy
. This is because both of these flows are inlined. How these flows speak to
one another is defined by the parent flows that call CollectSignaturesFlow
and SignTransactionFlow
.
In code, inlined subflows appear as regular FlowLogic
instances without either an InitiatingFlow
or an
InitiatedBy
annotation.
Inlined flows are not versioned, as they inherit the version of their parent InitiatingFlow
or InitiatedBy
flow.
Flows which are not an InitiatingFlow
or InitiatedBy
flow, or inlined subflows that are not called from an
InitiatingFlow
or InitiatedBy
flow, can be updated without consideration of backwards-compatibility. Flows of
this type include utility flows for querying the vault and flows for reaching out to external systems.
Performing flow upgrades¶
- Update the flow and test the changes. Increment the flow version number in the InitiatingFlow annotation
- Ensure that all versions of the existing flow have finished running and there are no pending SchedulableFlows on any of the nodes on the business network. This can be done by flow draining (see Flow draining below)
- Shut down the node
- Replace the existing CorDapp JAR with the CorDapp JAR containing the new flow
- Start the node
If you shut down all nodes and upgrade them all at the same time, any incompatible change can be made.
In situations where some nodes may still be using previous versions of a flow and thus new versions of your flow may talk to old versions, the updated flows need to be backwards-compatible. This will be the case for almost any real deployment in which you cannot easily coordinate the roll-out of new code across the network.
Flow draining¶
A flow checkpoint is a serialised snapshot of the flow’s stack frames and any objects reachable from the stack. Checkpoints are saved to the database automatically when a flow suspends or resumes, which typically happens when sending or receiving messages. A flow may be replayed from the last checkpoint if the node restarts. Automatic checkpointing is an unusual feature of Corda and significantly helps developers write reliable code that can survive node restarts and crashes. It also assists with scaling up, as flows that are waiting for a response can be flushed from memory.
However, this means that restoring an old checkpoint to a new version of a flow may cause resume failures. For example if you remove a local variable from a method that previously had one, then the flow engine won’t be able to figure out where to put the stored value of the variable.
For this reason, in currently released versions of Corda you must drain the node before doing an app upgrade that
changes @Suspendable
code. A drain blocks new flows from starting but allows existing flows to finish. Thus once
a drain is complete there should be no outstanding checkpoints or running flows. Upgrading the app will then succeed.
A node can be drained or undrained via RPC using the setFlowsDrainingModeEnabled
method, and via the shell using
the standard run
command to invoke the RPC. See Node shell to learn more.
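A minimal sketch of toggling draining mode over RPC (host, port and credentials are placeholders):
import net.corda.client.rpc.CordaRPCClient
import net.corda.core.utilities.NetworkHostAndPort
fun drainNode() {
    val client = CordaRPCClient(NetworkHostAndPort("localhost", 10006))
    client.start("rpcUser", "rpcPassword").use { connection ->
        // Block new flows from starting; existing flows keep running to completion.
        connection.proxy.setFlowsDrainingModeEnabled(true)
    }
}
From the shell, the equivalent would be along the lines of run setFlowsDrainingModeEnabled enabled: true.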
Contract and state versioning¶
There are two types of contract/state upgrade:
- Implicit: By allowing multiple implementations of the contract ahead of time, using constraints. See API: Contract Constraints to learn more
- Explicit: By creating a special contract upgrade transaction and getting all participants of a state to sign it using the contract upgrade flows
The general recommendation for Corda 4 is to use implicit upgrades for the reasons described here.
Performing explicit contract and state upgrades¶
In an explicit upgrade, contracts and states can be changed in arbitrary ways, if and only if all of the state’s participants agree to the proposed upgrade. To ensure the continuity of the chain the upgraded contract needs to declare the contract and constraint of the states it’s allowed to replace.
Warning
In Corda 4 we've introduced the Signature Constraint (see API: Contract Constraints). States created or migrated to the Signature Constraint can't be explicitly upgraded using the Contract upgrade transaction. This feature might be added in a future version. Given the nature of the Signature constraint there should be little need to create a brand new contract to fix issues in the old contract.
1. Preserve the existing state and contract definitions¶
Currently, all nodes must permanently keep all old state and contract definitions on their node’s classpath if the explicit upgrade process was used on them.
Note
This requirement will go away in a future version of Corda. In Corda 4, the contract-code-as-attachment feature was implemented
only for “normal” transactions. Contract Upgrade
and Notary Change
transactions will still be executed within the node classpath.
2. Write the new state and contract definitions¶
Update the contract and state definitions. There are no restrictions on how states are updated. However,
upgraded contracts must implement the UpgradedContract
interface. This interface is defined as:
interface UpgradedContract<in OldState : ContractState, out NewState : ContractState> : Contract {
val legacyContract: ContractClassName
fun upgrade(state: OldState): NewState
}
The upgrade
method describes how the old state type is upgraded to the new state type.
By default this new contract will only be able to upgrade legacy states which are constrained by the zone whitelist (see API: Contract Constraints).
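A minimal sketch of such an upgraded contract (all names are hypothetical; ObligationV1 stands for the legacy contract being replaced, and is not shown):
import net.corda.core.contracts.*
import net.corda.core.identity.AbstractParty
import net.corda.core.transactions.LedgerTransaction
import java.util.Currency
class ObligationV2 : UpgradedContract<ObligationV1.State, ObligationV2.State> {
    data class State(
        val amount: Amount<Currency>,
        val lender: AbstractParty,
        val borrower: AbstractParty
    ) : ContractState {
        override val participants: List<AbstractParty> get() = listOf(lender, borrower)
    }
    // Fully-qualified name of the contract class whose states may be replaced.
    override val legacyContract: ContractClassName = "com.example.ObligationV1"
    // How an old state becomes a new state; arbitrary changes are allowed here.
    override fun upgrade(state: ObligationV1.State): State =
        State(state.amount, state.lender, state.borrower)
    override fun verify(tx: LedgerTransaction) {
        // New verification logic goes here.
    }
}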
Note
The requirement for a legacyContractConstraint
arises from the fact that when a transaction chain is verified and a Contract Upgrade
is
encountered on the back chain, the verifier wants to know that a legitimate state was transformed into the new contract. The legacyContractConstraint
is
the mechanism by which this is enforced. Using it, the new contract is able to narrow down what constraint the states it is upgrading should have.
If a malicious party created a fake com.megacorp.MegaToken state, they would not be able to use the usual MegaToken code, as their fake token would not validate because the constraints would not match. The com.megacorp.SuperMegaToken would know that it is a fake state and thus refuse to upgrade it.
It is safe to omit the legacyContractConstraint
for the zone whitelist constraint, because the chain of trust is ensured by the Zone operator
who would have whitelisted both contracts and checked them.
If the hash constraint is used, the new contract should implement UpgradedContractWithLegacyConstraint
instead, and specify the constraint explicitly:
interface UpgradedContractWithLegacyConstraint<in OldState : ContractState, out NewState : ContractState> : UpgradedContract<OldState, NewState> {
val legacyContractConstraint: AttachmentConstraint
}
For example, in case of hash constraints the hash of the legacy JAR file should be provided:
override val legacyContractConstraint: AttachmentConstraint
get() = HashAttachmentConstraint(SecureHash.parse("E02BD2B9B010BBCE49C0D7C35BECEF2C79BEB2EE80D902B54CC9231418A4FA0C"))
3. Create the new CorDapp JAR¶
Produce a new CorDapp JAR file. This JAR file should only contain the new contract and state definitions.
4. Distribute the new CorDapp JAR¶
Place the new CorDapp JAR file in the cordapps
folder of all the relevant nodes. You can do this while the nodes are still
running.
5. Stop the nodes¶
Have each node operator stop their node. If you are also changing flow definitions, you should perform a node drain first to avoid the definition of states or contracts changing whilst a flow is in progress.
6. Re-run the network bootstrapper (only if you want to whitelist the new contracts)¶
If you’re using the network bootstrapper instead of a network map server and have defined any new contracts, you need to re-run the network bootstrapper to whitelist the new contracts. See Network Bootstrapper.
8. Authorise the upgrade¶
Now that new states and contracts are on the classpath for all the relevant nodes, the nodes must all run the
ContractUpgradeFlow.Authorise
flow. This flow takes a StateAndRef
of the state to update as well as a reference
to the new contract, which must implement the UpgradedContract
interface.
At any point, a node administrator may de-authorise a contract upgrade by running the
ContractUpgradeFlow.Deauthorise
flow.
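A minimal sketch of these flows driven over RPC (the proxy is obtained as in the draining sketch earlier; the state and the ObligationV1/ObligationV2 names are the hypothetical ones from above):
import net.corda.core.contracts.StateAndRef
import net.corda.core.flows.ContractUpgradeFlow
import net.corda.core.messaging.CordaRPCOps
import net.corda.core.messaging.startFlow
import net.corda.core.utilities.getOrThrow
fun authorise(proxy: CordaRPCOps, stateAndRef: StateAndRef<ObligationV1.State>) {
    // Authorise upgrading this state to the new contract.
    proxy.startFlow(ContractUpgradeFlow::Authorise, stateAndRef, ObligationV2::class.java)
        .returnValue.getOrThrow()
}
fun deauthorise(proxy: CordaRPCOps, stateAndRef: StateAndRef<ObligationV1.State>) {
    // A node administrator can withdraw a previously granted authorisation.
    proxy.startFlow(ContractUpgradeFlow::Deauthorise, stateAndRef.ref)
        .returnValue.getOrThrow()
}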
9. Perform the upgrade¶
Once all nodes have performed the authorisation process, a single node must initiate the upgrade via the
ContractUpgradeFlow.Initiate
flow for each state object. This flow has the following signature:
class Initiate<OldState : ContractState, out NewState : ContractState>(
originalState: StateAndRef<OldState>,
newContractClass: Class<out UpgradedContract<OldState, NewState>>
) : AbstractStateReplacementFlow.Instigator<OldState, NewState, Class<out UpgradedContract<OldState, NewState>>>(originalState, newContractClass)
This flow sub-classes AbstractStateReplacementFlow
, which can be used to upgrade state objects that do not need a
contract upgrade.
Once the flow ends successfully, all the participants of the old state object should have the upgraded state object which references the new contract code.
Points to note¶
Capabilities of the contract upgrade flows¶
- Despite its name, the ContractUpgradeFlow handles the update of both state object definitions and contract logic
- The state can completely change as part of an upgrade! For example, it is possible to transmute a Cat state into a Dog state, provided that all participants in the Cat state agree to the change
- If a node has not yet run the contract upgrade authorisation flow, it will not be able to upgrade the contract and/or state objects
- State schema changes are handled separately
Process¶
- All nodes need to run the contract upgrade authorisation flow to upgrade the contract and/or state objects
- Only node administrators are able to run the contract upgrade authorisation and deauthorisation flows
- Upgrade authorisations can subsequently be deauthorised
- Only one node should run the contract upgrade initiation flow. If multiple nodes run it for the same StateRef, a double-spend will occur for all but the first completed upgrade
- Upgrades do not have to happen immediately. For a period, the two parties can use the old states and contracts side-by-side
- The supplied upgrade flows upgrade one state object at a time
State schema versioning¶
By default, all state objects are serialised to the database as a string of bytes and referenced by their StateRef
.
However, it is also possible to define custom schemas for serialising particular properties or combinations of
properties, so that they can be queried from a source other than the Corda Vault. This is done by implementing the
QueryableState
interface and creating a custom object relational mapper for the state. See API: Persistence for details.
For backwards compatible changes such as adding columns, the procedure for upgrading a state schema is to extend the existing object relational mapper. For example, we can update:
object ObligationSchemaV1 : MappedSchema(Obligation::class.java, 1, listOf(ObligationEntity::class.java)) {
@Entity @Table(name = "obligations")
class ObligationEntity(obligation: Obligation) : PersistentState() {
@Column var currency: String = obligation.amount.token.toString()
@Column var amount: Long = obligation.amount.quantity
@Column @Lob var lender: ByteArray = obligation.lender.owningKey.encoded
@Column @Lob var borrower: ByteArray = obligation.borrower.owningKey.encoded
@Column var linear_id: String = obligation.linearId.id.toString()
}
}
public class ObligationSchemaV1 extends MappedSchema {
public ObligationSchemaV1() {
super(Obligation.class, 1, ImmutableList.of(ObligationEntity.class));
}
}
@Entity
@Table(name = "obligations")
public class ObligationEntity extends PersistentState {
@Column(name = "currency") private String currency;
@Column(name = "amount") private Long amount;
@Column(name = "lender") @Lob private byte[] lender;
@Column(name = "borrower") @Lob private byte[] borrower;
@Column(name = "linear_id") private UUID linearId;
protected ObligationEntity(){}
public ObligationEntity(String currency, Long amount, byte[] lender, byte[] borrower, UUID linearId) {
this.currency = currency;
this.amount = amount;
this.lender = lender;
this.borrower = borrower;
this.linearId = linearId;
}
public String getCurrency() {
return currency;
}
public Long getAmount() {
return amount;
}
public byte[] getLender() {
return lender;
}
public byte[] getBorrower() {
return borrower;
}
public UUID getLinearId() {
return linearId;
}
}
to:
object ObligationSchemaV1 : MappedSchema(Obligation::class.java, 1, listOf(ObligationEntity::class.java)) {
@Entity @Table(name = "obligations")
class ObligationEntity(obligation: Obligation) : PersistentState() {
@Column var currency: String = obligation.amount.token.toString()
@Column var amount: Long = obligation.amount.quantity
@Column @Lob var lender: ByteArray = obligation.lender.owningKey.encoded
@Column @Lob var borrower: ByteArray = obligation.borrower.owningKey.encoded
@Column var linear_id: String = obligation.linearId.id.toString()
@Column var defaulted: Boolean = obligation.amount.inDefault // NEW COLUMN!
}
}
public class ObligationSchemaV1 extends MappedSchema {
public ObligationSchemaV1() {
super(Obligation.class, 1, ImmutableList.of(ObligationEntity.class));
}
}
@Entity
@Table(name = "obligations")
public class ObligationEntity extends PersistentState {
@Column(name = "currency") private String currency;
@Column(name = "amount") private Long amount;
@Column(name = "lender") @Lob private byte[] lender;
@Column(name = "borrower") @Lob private byte[] borrower;
@Column(name = "linear_id") private UUID linearId;
@Column(name = "defaulted") private Boolean defaulted; // NEW COLUMN!
protected ObligationEntity(){}
public ObligationEntity(String currency, Long amount, byte[] lender, byte[] borrower, UUID linearId, Boolean defaulted) {
this.currency = currency;
this.amount = amount;
this.lender = lender;
this.borrower = borrower;
this.linearId = linearId;
this.defaulted = defaulted;
}
public String getCurrency() {
return currency;
}
public Long getAmount() {
return amount;
}
public byte[] getLender() {
return lender;
}
public byte[] getBorrower() {
return borrower;
}
public UUID getLinearId() {
return linearId;
}
public Boolean isDefaulted() {
return defaulted;
}
}
Thus adding a new column with a default value.
To make a non-backwards compatible change, the ContractUpgradeFlow
or AbstractStateReplacementFlow
must be
used, as changes to the state are required. To make a backwards-incompatible change such as deleting a column (e.g.
because a property was removed from a state object), the procedure is to define another object relational mapper, then
add it to the supportedSchemas
property of your QueryableState
, like so:
override fun supportedSchemas(): Iterable<MappedSchema> = listOf(ExampleSchemaV1, ExampleSchemaV2)
@Override public Iterable<MappedSchema> supportedSchemas() {
return ImmutableList.of(new ExampleSchemaV1(), new ExampleSchemaV2());
}
Then, in generateMappedObject
, add support for the new schema:
override fun generateMappedObject(schema: MappedSchema): PersistentState {
return when (schema) {
is DummyLinearStateSchemaV1 -> // Omitted.
is DummyLinearStateSchemaV2 -> // Omitted.
else -> throw IllegalArgumentException("Unrecognised schema $schema")
}
}
@Override public PersistentState generateMappedObject(MappedSchema schema) {
if (schema instanceof DummyLinearStateSchemaV1) {
// Omitted.
} else if (schema instanceof DummyLinearStateSchemaV2) {
// Omitted.
} else {
throw new IllegalArgumentException("Unrecognised schema $schema");
}
}
With this approach, whenever the state object is stored in the vault, a representation of it will be stored in two separate database tables where possible - one for each supported schema.
Serialisation¶
Corda serialisation formats¶
Currently, the serialisation format for everything except flow checkpoints (which uses a Kryo-based format) is based on AMQP 1.0, a self-describing and controllable serialisation format. AMQP is desirable because it allows us to have a schema describing what has been serialized alongside the data itself. This assists with versioning and deserialising long-ago archived data, among other things.
Writing classes that meet the serialisation format requirements¶
Although not strictly related to versioning, AMQP serialisation dictates that we must write our classes in a particular way:
- Your class must have a constructor that takes all the properties that you wish to record in the serialized form. This is required in order for the serialization framework to reconstruct an instance of your class
- If more than one constructor is provided, the serialization framework needs to know which one to use. The @ConstructorForDeserialization annotation can be used to indicate the chosen constructor. For a Kotlin class without the @ConstructorForDeserialization annotation, the primary constructor is selected
- The class must be compiled with parameter names in the .class file. This is the default in Kotlin but must be turned on in Java (using the -parameters command line option to javac)
- Your class must provide a Java Bean getter for each of the properties in the constructor, with a matching name. For example, if a class has the constructor parameter foo, there must be a getter called getFoo(). If foo is a boolean, the getter may optionally be called isFoo(). This is why the class must be compiled with parameter names turned on
- The class must be annotated with @CordaSerializable
- The declared types of constructor arguments/getters must be supported, and where generics are used the generic parameter must be a supported type, an open wildcard (*), or a bounded wildcard which is currently widened to an open wildcard
- Any superclass must adhere to the same rules, but can be abstract
- Object graph cycles are not supported, so an object cannot refer to itself, directly or indirectly
A short sketch of a class meeting these rules follows this list.
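A minimal sketch of a class that satisfies these rules (the class itself is illustrative):
import net.corda.core.serialization.CordaSerializable
// A Kotlin data class with val properties gets a single constructor and matching
// getters for free, so the only remaining rule to apply is the annotation. There
// are no generic parameters and no object cycles here.
@CordaSerializable
data class TradeSummary(val ticker: String, val quantity: Long, val settled: Boolean)
The equivalent Java class would need explicit getters (getTicker() and so on) and compilation with -parameters.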
CorDapp constraints migration¶
Note
Before reading this page, you should be familiar with the key concepts of Contract Constraints.
Corda 4 introduces, and recommends, building signed CorDapps that issue states with signature constraints. Existing on-ledger states issued before Corda 4 are not automatically transitioned to new signature constraints when building transactions in Corda 4. This document explains how to modify existing CorDapp flows to explicitly consume and evolve pre-Corda 4 states, and outlines a future mechanism where such states will transition automatically (without explicit migration code).
Faced with the exercise of upgrading an existing Corda 3.x CorDapp to Corda 4, you need to consider the following:
What existing unconsumed states have been issued on ledger by a previous version of this CorDapp and using other constraint types?
If you have existing hash constrained states see Migrating hash constraints.
If you have existing CZ whitelisted constrained states see Migrating CZ whitelisted constraints.
If you have existing always accept constrained states these are not consumable nor evolvable as they offer no security and should only be used in test environments.
What type of contract states does my CorDapp use?
Linear states typically evolve over an extended period of time (defined by the lifecycle of the associated business use case), and thus are prime candidates for constraints migration.
Fungible states are created by an issuer and transferred around a Corda network until explicitly exited (by the same issuer). They do not evolve as linear states, but are transferred between participants on a network. Their consumption may produce additional new output states to represent adjustments to the original state (e.g. change when spending cash). For the purposes of constraints migration, it is desirable that any new output states are produced using the new Corda 4 signature constraint types.
Where you have long transaction chains of fungible states, it may be advisable to send them back to the issuer for re-issuance (this is called “chain snipping” and has performance advantages as well as simplifying constraints type migration).
Should I use the implicit or explicit upgrade path?
The general recommendation for Corda 4 is to use implicit upgrades for the reasons described here.
Implicit upgrades allow pre-authorising multiple implementations of the contract ahead of time. They do not require additional coding and do not incur a complex choreographed operational upgrade process.
Warning
The steps outlined in this page assume you are using the same CorDapp Contract (e.g. same state definition, commands and verification code) and wish to use that CorDapp to leverage the upgradeability benefits of Corda 4 signature constraints. If you are looking to upgrade code within an existing Contract CorDapp please read Contract and state versioning and CorDapp Upgradeability Guarantees to understand your options.
Please also remember that states are always consumable if the version of the CorDapp that issued (created) them is installed. In the simplest of scenarios it may be easier to re-issue existing hash or CZ whitelist constrained states (e.g. exit them from the ledger using the original unsigned CorDapp and re-issue them using the new signed CorDapp).
Hash constraints migration¶
Note
These instructions only apply to CorDapp Contract JARs (unless otherwise stated).
Corda 4.0¶
Corda 4.0 requires some additional steps to consume and evolve pre-existing on-ledger hash constrained states:
- All Corda nodes in the same CZ or business network that may encounter a transaction chain with a hash constrained state must be started using relaxed hash constraint checking mode, as described in Hash constrained states in private networks.
- CorDapp flows that build transactions using pre-existing hash-constrained states must explicitly set output states to use signature constraints and specify the related public key(s) used in signing the associated CorDapp Contract JAR:
// This will read the signers for the deployed CorDapp.
val attachment = this.serviceHub.cordappProvider.getContractAttachmentID(contractClass)
val signers = this.serviceHub.attachments.openAttachment(attachment!!)!!.signerKeys
// Create the key that will have to pass for all future versions.
val ownersKey = signers.first()
val txBuilder = TransactionBuilder(notary)
// Set the Signature constraint on the new state to migrate away from the hash constraint.
.addOutputState(outputState, constraint = SignatureAttachmentConstraint(ownersKey))
// This will read the signers for the deployed CorDapp.
SecureHash attachment = this.getServiceHub().getCordappProvider().getContractAttachmentID(contractClass);
List<PublicKey> signers = this.getServiceHub().getAttachments().openAttachment(attachment).getSignerKeys();
// Create the key that will have to pass for all future versions.
PublicKey ownersKey = signers.get(0);
TransactionBuilder txBuilder = new TransactionBuilder(notary)
// Set the Signature constraint on the new state to migrate away from the hash constraint.
.addOutputState(outputState, myContract, new SignatureAttachmentConstraint(ownersKey))
- As a node operator you need to add the new signed version of the contracts CorDapp to the
/cordapps
folder together with the latest version of the flows jar. Please also ensure that the original unsigned contracts CorDapp is removed from the/cordapps
folder (this will already be present in the nodes attachments store) to ensure the lookup code in step 2 retrieves the correct signed contract CorDapp JAR.
Later releases¶
The next version of Corda will provide automatic transition of hash constrained states. This means that signed CorDapps running on a Corda 4.x node will automatically propagate any pre-existing on-ledger hash-constrained states (and generate signature-constrained outputs) when the system property to break constraints is set.
CZ whitelisted constraints migration¶
Note
These instructions only apply to CorDapp Contract JARs (unless otherwise stated).
Corda 4.0¶
Corda 4.0 requires some additional steps to consume and evolve pre-existing on-ledger CZ whitelisted constrained states:
As the original developer of the CorDapp, the first step is to sign the latest version of the JAR that was released (see Building and installing a CorDapp). The key used for signing will be used to sign all subsequent releases, so it should be stored appropriately. The JAR can be signed by multiple keys owned by different parties and it will be expressed as a CompositeKey in the SignatureAttachmentConstraint (see API: Core types).
The new Corda 4 signed CorDapp JAR must be registered with the CZ network operator (as whitelisted in the network parameters which are distributed to all nodes in that CZ). The CZ network operator should check that the JAR is signed and not allow any more versions of it to be whitelisted in the future. From now on the development organisation that signed the JAR is responsible for signing new versions.
The process of CZ network CorDapp whitelisting depends on how the Corda network is configured:
- if using a hosted CZ network (such as The Corda Network or UAT Environment ) running an Identity Operator (formerly known as Doorman) and Network Map Service, you should manually send the hashes of the two JARs to the CZ network operator and request these be added using their network parameter update process.
- if using a local network created using the Network Bootstrapper tool, please follow the instructions in Updating the contract whitelist for bootstrapped networks to add both CorDapp Contract JAR hashes.
3. Any flows that build transactions using this CorDapp will have the responsibility of transitioning states to the `SignatureAttachmentConstraint`. This is done explicitly in the code by setting the constraint of the output states to the signers of the latest version of the whitelisted JAR:
// This will read the signers for the deployed CorDapp.
val attachment = this.serviceHub.cordappProvider.getContractAttachmentID(contractClass)
val signers = this.serviceHub.attachments.openAttachment(attachment!!)!!.signerKeys
// Create the key that will have to pass for all future versions.
val ownersKey = signers.first()
val txBuilder = TransactionBuilder(notary)
// Set the Signature constraint on the new state to migrate away from the WhitelistConstraint.
.addOutputState(outputState, constraint = SignatureAttachmentConstraint(ownersKey))
// This will read the signers for the deployed CorDapp.
SecureHash attachment = this.getServiceHub().getCordappProvider().getContractAttachmentID(contractClass);
List<PublicKey> signers = this.getServiceHub().getAttachments().openAttachment(attachment).getSignerKeys();
// Create the key that will have to pass for all future versions.
PublicKey ownersKey = signers.get(0);
TransactionBuilder txBuilder = new TransactionBuilder(notary)
// Set the Signature constraint on the new state to migrate away from the WhitelistConstraint.
.addOutputState(outputState, myContract, new SignatureAttachmentConstraint(ownersKey))
- As a node operator you need to add the new signed version of the contracts CorDapp to the `/cordapps` folder together with the latest version of the flows JAR. Please also ensure that the original unsigned contracts CorDapp is removed from the `/cordapps` folder (it will already be present in the node's attachments store), so that the lookup code in step 3 retrieves the correct signed contract CorDapp JAR.
Later releases¶
The next version of Corda will provide automatic transition of CZ whitelisted constrained states. This means that signed CorDapps running on a Corda 4.x node will automatically propagate any pre-existing on-ledger CZ whitelisted constrained states (and generate signature constrained outputs).
CorDapp Upgradeability Guarantees¶
Corda 4.0¶
Corda 4 introduces a number of advanced features (such as signature constraints), and data security model improvements (such as attachments trust checking and classloader isolation of contract attachments for transaction building and verification).
The following guarantees are made for CorDapps running on Corda 4.0:
- Compliant CorDapps compiled with previous versions of Corda (from 3.0) will execute without change on Corda 4.0.
注解
By “compliant” we mean CorDapps that do not utilise Corda internal, non-stable or other non-committed public Corda APIs.
- Recommendation: due to security hardening changes in flow processing, specifically around `FinalityFlow`, we recommend upgrading existing CorDapp receiver flows to use the new APIs and thus opting in to platform version 4 (a minimal sketch follows this list). See Step 5. Security: Upgrade your use of FinalityFlow for more information.
- All constraint types (hash, CZ whitelisted, signature) are consumable within the same transaction if there is an associated contract attachment that satisfies all of them.
- CorDapp Contract states generated on ledger using hash constraints are not directly migratable to signature constraints in this release. Your compatibility zone operator may whitelist a JAR previously used to issue hash constrained states, and you can then follow the manual process described in CorDapp constraints migration to migrate these states to signature constraints.
- CorDapp Contract states generated on ledger using CZ whitelisted constraints are migratable to signature constraints using a manual process that requires programmatic code changes. See CZ whitelisted constraints migration for more information.
- Explicit Contract Upgrades are only supported for hash and CZ whitelisted constraint types. See 进行显式的 contract 和 state 升级 for more information.
- CorDapp contract attachments are not trusted from remote peers over the p2p network for the purpose of transaction verification. A node operator must locally install all versions of a Contract attachment to be able to resolve a chain of contract states from its original version. The RPC `uploadAttachment` mechanism can be used to achieve this (as well as conventional loading of a CorDapp by installing it in the node's /cordapps directory). See Installing the CorDapp JAR and CorDapp Contract Attachments for more information.
- CorDapp contract attachment classloader isolation has some important side-effects and edge cases to consider:
- Contract attachments should include all 3rd party library dependencies in the same packaged JAR - we call this a “Fat JAR”, meaning that all dependencies are resolvable by the classloader by only loading a single JAR.
- Contract attachments that depend on other Contract attachments from a different packaged JAR are currently supported insofar as the Attachments Classloader will attempt to resolve any external dependencies from the node's application classloader. It is thus paramount that dependent Contract attachments are loaded upon node startup from the respective /cordapps directory.
- Rolling upgrades are partially supported. A node operator may choose to manually upload (via the RPC attachments uploader mechanism) a later version of a Contract attachment than the version their node is currently using for the purposes of transaction verification (received from remote peers). However, they will only be able to build new transactions with the version that is currently loaded (installed from the node's /cordapps directory) in their node.
- Finance CorDapp (v4): whilst experimental, our test coverage has confirmed that states generated with the Finance CorDapp are interchangeable across Open Source and Enterprise distributions. This has been made possible by releasing a single 4.0 version of the Finance Contracts CorDapp. Please note the Finance application will shortly be superseded by the new Tokens SDK (https://github.com/corda/token-sdk).
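Below is the sketch referenced in the `FinalityFlow` recommendation above: a minimal example of opting in to the new APIs, assembled from the patterns that appear in the Flow 菜谱 section later in this document. The flow names are illustrative assumptions, and the transaction-building step is passed in rather than shown; imports are the usual `net.corda.core` ones used in the cookbook.
@InitiatingFlow
class SendToCounterpartyFlow(
        private val fullySignedTx: SignedTransaction,
        private val counterparty: Party
) : FlowLogic<SignedTransaction>() {
    @Suspendable
    override fun call(): SignedTransaction {
        val session = initiateFlow(counterparty)
        // Corda 4 API: FinalityFlow takes the counterparty sessions explicitly,
        // rather than broadcasting the transaction blindly.
        return subFlow(FinalityFlow(fullySignedTx, listOf(session)))
    }
}

@InitiatedBy(SendToCounterpartyFlow::class)
class ReceiveFromCounterpartyFlow(private val otherSideSession: FlowSession) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        // Corda 4 API: the receiving side must explicitly run
        // ReceiveFinalityFlow to resolve and record the finalised transaction.
        subFlow(ReceiveFinalityFlow(otherSideSession))
    }
}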
Later releases¶
The following additional capabilities are under consideration for delivery in follow-up releases to Corda 4.0:
- CorDapp Contract states generated on ledger using hash constraints will be automatically migrated to signature constraints when building new transactions where the latest installed contract Jar is signed as per CorDapp Jar signing.
- CorDapp Contract states generated on ledger using CZ whitelisted constraints will be automatically migrated to signature constraints when building new transactions where the latest installed contract Jar is signed as per CorDapp Jar signing.
- Explicit Contract Upgrades will be supported for all constraint types: hash, CZ whitelisted and signature. In practice, it should only be necessary to upgrade from hash or CZ whitelisted constraints to new signature constrained contract types. Signature constrained contracts are upgradeable seamlessly (through built-in serialization and code signing controls) without requiring explicit upgrades.
- Contract attachments will be able to explicitly declare their dependencies on other Contract attachments such that these are automatically loaded by the Attachments Classloader (rendering the 4.0 fallback to application classloader mechanism redundant). This improved modularity removes the need to “Fat JAR” all dependencies together in a single jar.
- Rolling upgrades will be fully supported. A Node operator will be able to pre-register (by hash or code signing public key) versions of CorDapps they are not yet ready to install locally, but wish to use for the purposes of transaction verification with peers running later versions of a CorDapp.
注解
Trusted downloading and execution of contract attachments from remote peers will not be integrated until secure JVM sand-boxing is available.
安全编码指南¶
The platform does what it can to be secure by default and safe by design. Unfortunately the platform cannot prevent every kind of security mistake. This document describes what to think about when writing applications to block various kinds of attack. Whilst it may be tempting to just assume no reasonable counterparty would attempt to subvert your trades using flow level attacks, relying on trust for software security makes it harder to scale up your operations later when you might want to add counterparties quickly and without extensive vetting.
Corda 平台在设计上已尽量做到默认安全。但不幸的是,平台无法防范所有类型的安全错误。这篇文章描述了在编写应用时需要考虑哪些方面来阻止各种类型的攻击。虽然你可能倾向于假设没有哪个理智的交易对手会尝试用 flow 级别的攻击来破坏你的交易,但把软件安全寄托在信任之上,会让你以后想要在没有大量审查的情况下快速添加交易对手时,更难扩展你的业务。
Flows¶
Flows are how your app communicates with other parties on the network. Therefore they are the typical entry point for malicious data into your app and must be treated with care.
Flow 是你的应用同网络中其他节点进行沟通的方式。因此它们是恶意数据进入你的应用的典型入口,必须谨慎对待。
The `receive` methods return data wrapped in the `UntrustworthyData<T>` marker type. This type doesn't add any functionality; it's only there to remind you to properly validate everything that you get from the network. Remember that the other side may not be running the code you provide to take part in the flow: they are allowed to do anything! Things to watch out for:
- A transaction that doesn’t match a partial transaction built or proposed earlier in the flow. For instance, if you propose to trade a cash state worth $100 for an asset, and the transaction to sign comes back from the other side, you must check that it points to the state you actually requested. Otherwise the attacker could get you to sign a transaction that spends a much larger state to them, if they know the ID of one!
- A transaction that isn’t of the right type. There are two transaction types: general and notary change. If you are expecting one type but get the other you may find yourself signing a transaction that transfers your assets to the control of a hostile notary.
- Unexpected changes in any part of the states in a transaction. If you have access to all the needed data, you could re-run the builder logic and do a comparison of the resulting states to ensure that it’s what you expected. For instance if the data needed to construct the next state is available to both parties, the function to calculate the transaction you want to mutually agree could be shared between both classes implementing both sides of the flow.
`receive` 方法返回的数据会被包装在 `UntrustworthyData<T>` 标记类型中。这个类型并没有添加任何功能,它的存在仅仅是为了提醒你:对于从网络中获得的任何信息都要正确地验证。记住,在另一端的节点可能并没有运行你提供给他的代码:他们可以做任何事情!你需要注意的事情包括:
- 一个同之前在 flow 中构建或提出的部分 transaction 不匹配的 transaction。比如你提出用价值 $100 的现金 state 交换某个资产(asset),当待签名的 transaction 从对方返回时,你必须检查它确实指向了你真正请求的那个 state。否则,如果攻击者知道某个更大的 state 的 ID,他们就可能欺骗你为一个把这个更大的 state 花费给他们的 transaction 签名。
- 一个类型不正确的 transaction。这里主要有两种 transaction 类型:通用类型和 notary 变更类型。如果你期望收到一种类型但得到的是另外一种类型,那么你可能会发现自己为一个将自己的资产转移给一个恶意 notary 的 transaction 提供了签名。
- 在一个 transaction 的任何 states 中存在非预期的变动。如果你对所有需要的数据都有访问权限的话,你可以重新运行 builder 逻辑,然后将得到的 states 进行对比,来确保它确实是你期望得到的。比如,如果用来构建下一个 state 的数据对双方都可用,那么用来计算你们想要共同同意的 transaction 的方法,可以在实现 flow 两端的两个类之间共享。
The theme should be clear: signing is a very sensitive operation, so you need to be sure you know what it is you are about to sign, and that nothing has changed in the small print! Once you have provided your signature over a transaction to a counterparty, there is no longer anything you can do to prevent them from committing it to the ledger.
思路应该很清晰了:签名是一个非常敏感的操作,所以你必须确定你将要签的是什么,并且确认“细则”中没有任何内容被改动!一旦你向交易对手提供了对某个 transaction 的签名,你就无法再做任何事情来阻止他们将其提交到账本中了。
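To make this concrete, below is a minimal sketch of responder-side validation. The `ProposedTrade` payload and the `expectedStateRef`/`maxPrice` values are hypothetical, assumed to have been established earlier in the flow; the point is that every field is checked inside `unwrap` before the data is used.
// Hypothetical payload type, for illustration only.
@CordaSerializable
data class ProposedTrade(val stateRef: StateRef, val price: Amount<Currency>)

@Suspendable
fun checkProposal(otherSideSession: FlowSession,
                  expectedStateRef: StateRef,
                  maxPrice: Amount<Currency>): ProposedTrade {
    // Everything from the network arrives as UntrustworthyData and must be
    // validated before it is used.
    return otherSideSession.receive<ProposedTrade>().unwrap { proposal ->
        // Check the proposal points at the state we actually requested.
        require(proposal.stateRef == expectedStateRef) { "Proposal references an unexpected state." }
        // Check the counterparty hasn't quietly changed the terms.
        require(proposal.price <= maxPrice) { "Counterparty changed the price." }
        proposal
    }
}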
Contracts¶
Contracts are arbitrary functions inside a JVM sandbox and therefore they have a lot of leeway to shoot themselves in the foot. Things to watch out for:
- Changes in states that should not be allowed by the current state transition. You will want to check that no fields are changing except the intended fields!
- Accidentally catching and discarding exceptions that might be thrown by validation logic.
- Calling into other contracts via virtual methods if you don’t know what those other contracts are or might do.
合约是在 JVM 沙盒(sandbox)中运行的任意函数,因此它们有很大的空间“搬起石头砸自己的脚”。需要关注的点包括:
- 当前的 state 转换本不应该允许的 state 变动。你需要检查除了预期要改动的字段以外,没有任何其他字段被改动!
- 意外地捕获并忽略了验证逻辑可能抛出的异常。
- 在不知道其他合约是什么、或者它们可能会做什么的情况下,通过虚拟方法调用其它合约。
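As an illustration of the first two points, here is a minimal contract sketch. The `IOUState` fields and the `Transfer` command are hypothetical; what matters is that `verify` pins down every field that must not change, and lets validation exceptions propagate rather than swallowing them.
// Hypothetical state, for illustration only.
data class IOUState(val amount: Amount<Currency>,
                    val owner: AbstractParty,
                    val issuer: AbstractParty) : ContractState {
    override val participants: List<AbstractParty> get() = listOf(owner, issuer)
}

class IOUContract : Contract {
    interface Commands : CommandData {
        class Transfer : TypeOnlyCommandData(), Commands
    }

    override fun verify(tx: LedgerTransaction) {
        val command = tx.commands.requireSingleCommand<Commands.Transfer>()
        val input = tx.inputsOfType<IOUState>().single()
        val output = tx.outputsOfType<IOUState>().single()
        // Do NOT wrap these checks in a try/catch that discards exceptions:
        // a swallowed exception here silently accepts an invalid transaction.
        requireThat {
            "The amount may not change." using (input.amount == output.amount)
            "The issuer may not change." using (input.issuer == output.issuer)
            "The owner must change." using (input.owner != output.owner)
            "The new owner must sign." using (output.owner.owningKey in command.signers)
        }
    }
}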
Configuring Responder Flows¶
A flow can be a fairly complex thing that interacts with many backend systems, and so it is likely that different users of a specific CorDapp will require differences in how flows interact with their specific infrastructure.
Corda supports this functionality by providing two mechanisms to modify the behaviour of apps in your node.
Subclassing a Flow¶
If you have a workflow which is mostly common, but also requires slight alterations in specific situations, most developers would be familiar with refactoring into Base and Sub classes. A simple example is shown below.
@InitiatedBy(Initiator::class)
open class BaseResponder(internal val otherSideSession: FlowSession) : FlowLogic<Unit>() {
@Suspendable
override fun call() {
otherSideSession.send(getMessage())
}
protected open fun getMessage() = "This Is the Legacy Responder"
}
@InitiatedBy(Initiator::class)
class SubResponder(otherSideSession: FlowSession) : BaseResponder(otherSideSession) {
override fun getMessage(): String {
return "This is the sub responder"
}
}
@InitiatingFlow
public class Initiator extends FlowLogic<String> {
private final Party otherSide;
public Initiator(Party otherSide) {
this.otherSide = otherSide;
}
@Override
public String call() throws FlowException {
return initiateFlow(otherSide).receive(String.class).unwrap((it) -> it);
}
}
@InitiatedBy(Initiator.class)
public class BaseResponder extends FlowLogic<Void> {
private FlowSession counterpartySession;
public BaseResponder(FlowSession counterpartySession) {
super();
this.counterpartySession = counterpartySession;
}
@Override
public Void call() throws FlowException {
counterpartySession.send(getMessage());
return null;
}
protected String getMessage() {
return "This Is the Legacy Responder";
}
}
@InitiatedBy(Initiator.class)
public class SubResponder extends BaseResponder {
public SubResponder(FlowSession counterpartySession) {
super(counterpartySession);
}
@Override
protected String getMessage() {
return "This is the sub responder";
}
}
Corda would detect that both `BaseResponder` and `SubResponder` are configured for responding to `Initiator`. Corda will then calculate the number of hops to `FlowLogic` and select the implementation which is the furthest distance, i.e. the most subclassed implementation. In the above example, `SubResponder` would be selected as the default responder for `Initiator`.
注解
The flows do not need to be within the same CorDapp or package. Therefore, to customise a shared app you obtained from a third party, you'd write your own CorDapp that subclasses the first.
Overriding a flow via node configuration¶
Whilst the subclassing approach is likely to be useful for most applications, there is another mechanism to override this behaviour. This would be useful if, for example, a specific CorDapp user requires such a different responder that subclassing an existing flow would not be a good solution. In this case, it's possible to specify a hardcoded flow via the node configuration.
注解
A new responder written to override an existing responder must still be annotated with `@InitiatedBy` referencing the base initiator, as in the sketch below.
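For example, a minimal sketch of such a replacement responder (the class name and message are illustrative assumptions, reusing the `Initiator` from the subclassing example above):
@InitiatedBy(Initiator::class)
class ExternalSystemResponder(private val otherSideSession: FlowSession) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        // Keep the wire sequence identical to the responder being replaced:
        // a single send of a String.
        otherSideSession.send("Response fetched from our own backend")
    }
}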
The configuration section is named `flowOverrides` and it accepts an array of overrides:
flowOverrides {
overrides=[
{
initiator="net.corda.Initiator"
responder="net.corda.BaseResponder"
}
]
}
The cordform plugin also provides a `flowOverride` method within the `deployNodes` block which can be used to override a flow. In the example below, we will override `SubResponder` with `BaseResponder`:
node {
name "O=Bank,L=London,C=GB"
p2pPort 10025
rpcUsers = ext.rpcUsers
rpcSettings {
address "localhost:10026"
adminAddress "localhost:10027"
}
extraConfig = ['h2Settings.address' : 'localhost:10035']
flowOverride("net.corda.Initiator", "net.corda.BaseResponder")
}
This will generate the corresponding `flowOverrides` section and place it in the configuration for that node.
Modifying the behaviour of @InitiatingFlow(s)¶
It is likely that initiating flows will also require changes to reflect the different systems they are likely to encounter. At the moment, Corda provides the ability to subclass an Initiator, and ensures that the correct responder will be invoked. In the example below, we will change the behaviour of an Initiator from filtering notaries out of its communications to communicating only with notaries:
@InitiatingFlow
@StartableByRPC
@StartableByService
open class BaseInitiator : FlowLogic<String>() {
    @Suspendable
    override fun call(): String {
        val partiesToTalkTo = serviceHub.networkMapCache.allNodes
                .filterNot { it.legalIdentities.first() in serviceHub.networkMapCache.notaryIdentities }
                .filterNot { it.legalIdentities.first().name == ourIdentity.name }
                .map { it.legalIdentities.first() }
        val responses = ArrayList<String>()
        for (party in partiesToTalkTo) {
            val session = initiateFlow(party)
            val received = session.receive<String>().unwrap { it }
            responses.add(party.name.toString() + " responded with backend: " + received)
        }
        return "${getFlowName()} received the following \n" + responses.joinToString("\n") { it }
    }

    open fun getFlowName(): String {
        return "Normal Computer"
    }
}

@StartableByRPC
@StartableByService
class NotaryOnlyInitiator : BaseInitiator() {
    @Suspendable
    override fun call(): String {
        return "Notary Communicator received:\n" + serviceHub.networkMapCache.notaryIdentities.map {
            "Notary: ${it.name.organisation} is using a " + initiateFlow(it).receive<String>().unwrap { it }
        }.joinToString("\n") { it }
    }
}
警告
The subclass must not have the `@InitiatingFlow` annotation.
Corda will use the first annotation detected in the class hierarchy to determine which responder should be invoked. So for a Responder similar to
@InitiatedBy(BaseInitiator::class)
class BobbyResponder(othersideSession: FlowSession) : BaseResponder(othersideSession) {
    override fun getMessageFromBackend(): String {
        return "Robert'); DROP TABLE STATES;"
    }
}
it would be possible to invoke either `BaseInitiator` or `NotaryOnlyInitiator` and `BobbyResponder` would be used to reply.
警告
You must ensure the sequence of sends/receives/subFlows in a subclass is compatible with the parent.
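To illustrate, the following sketch (reusing `BaseResponder` from the subclassing example above, with an illustrative class name) keeps the parent's wire sequence of exactly one String send, so it is a safe override; the commented-out variant would not be.
@InitiatedBy(Initiator::class)
class CompatibleResponder(otherSideSession: FlowSession) : BaseResponder(otherSideSession) {
    // Safe: only the content changes, not the sequence of sends/receives.
    override fun getMessage() = "Different message, same wire sequence"

    // Unsafe (do not do this): adding an extra receive before the parent's
    // send would desynchronise the session, e.g.
    //     override fun call() {
    //         otherSideSession.receive<String>()  // the initiator never sends this
    //         super.call()
    //     }
}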
Flow 菜谱¶
This flow showcases how to use Corda’s API, in both Java and Kotlin.
这里展示了如何使用 Corda 的 API,包括 Java 和 Kotlin。
@file:Suppress("UNUSED_VARIABLE", "unused", "DEPRECATION")
package net.corda.docs.kotlin
import co.paralleluniverse.fibers.Suspendable
import net.corda.core.contracts.*
import net.corda.core.crypto.SecureHash
import net.corda.core.crypto.TransactionSignature
import net.corda.core.crypto.generateKeyPair
import net.corda.core.flows.*
import net.corda.core.identity.CordaX500Name
import net.corda.core.identity.Party
import net.corda.core.identity.PartyAndCertificate
import net.corda.core.internal.FetchDataFlow
import net.corda.core.node.services.Vault.Page
import net.corda.core.node.services.queryBy
import net.corda.core.node.services.vault.QueryCriteria.VaultQueryCriteria
import net.corda.core.transactions.LedgerTransaction
import net.corda.core.transactions.SignedTransaction
import net.corda.core.transactions.TransactionBuilder
import net.corda.core.utilities.ProgressTracker
import net.corda.core.utilities.ProgressTracker.Step
import net.corda.core.utilities.UntrustworthyData
import net.corda.core.utilities.seconds
import net.corda.core.utilities.unwrap
import net.corda.finance.contracts.asset.Cash
import net.corda.testing.contracts.DummyContract
import net.corda.testing.contracts.DummyState
import java.security.PublicKey
import java.security.Signature
import java.time.Instant
// ``InitiatorFlow`` is our first flow, and will communicate with
// ``ResponderFlow``, below.
// We mark ``InitiatorFlow`` as an ``InitiatingFlow``, allowing it to be
// started directly by the node.
@InitiatingFlow
// We also mark ``InitiatorFlow`` as ``StartableByRPC``, allowing the
// node's owner to start the flow via RPC.
@StartableByRPC
// Every flow must subclass ``FlowLogic``. The generic indicates the
// flow's return type.
class InitiatorFlow(val arg1: Boolean, val arg2: Int, private val counterparty: Party, val regulator: Party) : FlowLogic<Unit>() {
/**---------------------------------
* WIRING UP THE PROGRESS TRACKER *
---------------------------------**/
// Giving our flow a progress tracker allows us to see the flow's
// progress visually in our node's CRaSH shell.
// DOCSTART 17
companion object {
object ID_OTHER_NODES : Step("Identifying other nodes on the network.")
object SENDING_AND_RECEIVING_DATA : Step("Sending data between parties.")
object EXTRACTING_VAULT_STATES : Step("Extracting states from the vault.")
object OTHER_TX_COMPONENTS : Step("Gathering a transaction's other components.")
object TX_BUILDING : Step("Building a transaction.")
object TX_SIGNING : Step("Signing a transaction.")
object TX_VERIFICATION : Step("Verifying a transaction.")
object SIGS_GATHERING : Step("Gathering a transaction's signatures.") {
// Wiring up a child progress tracker allows us to see the
// subflow's progress steps in our flow's progress tracker.
override fun childProgressTracker() = CollectSignaturesFlow.tracker()
}
object VERIFYING_SIGS : Step("Verifying a transaction's signatures.")
object FINALISATION : Step("Finalising a transaction.") {
override fun childProgressTracker() = FinalityFlow.tracker()
}
fun tracker() = ProgressTracker(
ID_OTHER_NODES,
SENDING_AND_RECEIVING_DATA,
EXTRACTING_VAULT_STATES,
OTHER_TX_COMPONENTS,
TX_BUILDING,
TX_SIGNING,
TX_VERIFICATION,
SIGS_GATHERING,
VERIFYING_SIGS,
FINALISATION
)
}
// DOCEND 17
override val progressTracker: ProgressTracker = tracker()
@Suppress("RemoveExplicitTypeArguments")
@Suspendable
override fun call() {
// We'll be using a dummy public key for demonstration purposes.
val dummyPubKey: PublicKey = generateKeyPair().public
/**--------------------------
* IDENTIFYING OTHER NODES *
--------------------------**/
// DOCSTART 18
progressTracker.currentStep = ID_OTHER_NODES
// DOCEND 18
// A transaction generally needs a notary:
// - To prevent double-spends if the transaction has inputs
// - To serve as a timestamping authority if the transaction has a
// time-window
// We retrieve the notary from the network map.
// DOCSTART 01
val notaryName: CordaX500Name = CordaX500Name(
organisation = "Notary Service",
locality = "London",
country = "GB")
val specificNotary: Party = serviceHub.networkMapCache.getNotary(notaryName)!!
// Alternatively, we can pick an arbitrary notary from the notary
// list. However, it is always preferable to specify the notary
// explicitly, as the notary list might change when new notaries are
// introduced, or old ones decommissioned.
val firstNotary: Party = serviceHub.networkMapCache.notaryIdentities.first()
// DOCEND 01
// We may also need to identify a specific counterparty. We do so
// using the identity service.
// DOCSTART 02
val counterpartyName: CordaX500Name = CordaX500Name(
organisation = "NodeA",
locality = "London",
country = "GB")
val namedCounterparty: Party = serviceHub.identityService.wellKnownPartyFromX500Name(counterpartyName) ?:
throw IllegalArgumentException("Couldn't find counterparty for NodeA in identity service")
val keyedCounterparty: Party = serviceHub.identityService.partyFromKey(dummyPubKey) ?:
throw IllegalArgumentException("Couldn't find counterparty with key: $dummyPubKey in identity service")
// DOCEND 02
/**-----------------------------
* SENDING AND RECEIVING DATA *
-----------------------------**/
progressTracker.currentStep = SENDING_AND_RECEIVING_DATA
// We start by initiating a flow session with the counterparty. We
// will use this session to send and receive messages from the
// counterparty.
// DOCSTART initiateFlow
val counterpartySession: FlowSession = initiateFlow(counterparty)
// DOCEND initiateFlow
// We can send arbitrary data to a counterparty.
// If this is the first ``send``, the counterparty will either:
// 1. Ignore the message if they are not registered to respond
// to messages from this flow.
// 2. Start the flow they have registered to respond to this flow,
// and run the flow until the first call to ``receive``, at
// which point they process the message.
// In other words, we are assuming that the counterparty is
// registered to respond to this flow, and has a corresponding
// ``receive`` call.
// DOCSTART 04
counterpartySession.send(Any())
// DOCEND 04
// We can wait to receive arbitrary data of a specific type from a
// counterparty. Again, this implies a corresponding ``send`` call
// in the counterparty's flow. A few scenarios:
// - We never receive a message back. In the current design, the
// flow is paused until the node's owner kills the flow.
// - Instead of sending a message back, the counterparty throws a
// ``FlowException``. This exception is propagated back to us,
// and we can use the error message to establish what happened.
// - We receive a message back, but it's of the wrong type. In
// this case, a ``FlowException`` is thrown.
// - We receive back a message of the correct type. All is good.
//
// Upon calling ``receive()`` (or ``sendAndReceive()``), the
// ``FlowLogic`` is suspended until it receives a response.
//
// We receive the data wrapped in an ``UntrustworthyData``
// instance. This is a reminder that the data we receive may not
// be what it appears to be! We must unwrap the
// ``UntrustworthyData`` using a lambda.
// DOCSTART 05
val packet1: UntrustworthyData<Int> = counterpartySession.receive<Int>()
val int: Int = packet1.unwrap { data ->
// Perform checking on the object received.
// T O D O: Check the received object.
// Return the object.
data
}
// DOCEND 05
// We can also use a single call to send data to a counterparty
// and wait to receive data of a specific type back. The type of
// data sent doesn't need to match the type of the data received
// back.
// DOCSTART 07
val packet2: UntrustworthyData<Boolean> = counterpartySession.sendAndReceive<Boolean>("You can send and receive any class!")
val boolean: Boolean = packet2.unwrap { data ->
// Perform checking on the object received.
// T O D O: Check the received object.
// Return the object.
data
}
// DOCEND 07
// We're not limited to sending to and receiving from a single
// counterparty. A flow can send messages to as many parties as it
// likes, and each party can invoke a different response flow.
// DOCSTART 06
val regulatorSession: FlowSession = initiateFlow(regulator)
regulatorSession.send(Any())
val packet3: UntrustworthyData<Any> = regulatorSession.receive<Any>()
// DOCEND 06
// We may also batch receives in order to increase performance. This
// ensures that only a single checkpoint is created for all received
// messages.
// Type-safe variant:
val signatures: List<UntrustworthyData<Signature>> =
receiveAll(Signature::class.java, listOf(counterpartySession, regulatorSession))
// Dynamic variant:
val messages: Map<FlowSession, UntrustworthyData<*>> =
receiveAllMap(mapOf(
counterpartySession to Boolean::class.java,
regulatorSession to String::class.java
))
/**-----------------------------------
* EXTRACTING STATES FROM THE VAULT *
-----------------------------------**/
progressTracker.currentStep = EXTRACTING_VAULT_STATES
// Let's assume there are already some ``DummyState``s in our
// node's vault, stored there as a result of running past flows,
// and we want to consume them in a transaction. There are many
// ways to extract these states from our vault.
// For example, we would extract any unconsumed ``DummyState``s
// from our vault as follows:
val criteria: VaultQueryCriteria = VaultQueryCriteria() // default is UNCONSUMED
val results: Page<DummyState> = serviceHub.vaultService.queryBy<DummyState>(criteria)
val dummyStates: List<StateAndRef<DummyState>> = results.states
// For a full list of the available ways of extracting states from
// the vault, see the Vault Query docs page.
// When building a transaction, input states are passed in as
// ``StateRef`` instances, which pair the hash of the transaction
// that generated the state with the state's index in the outputs
// of that transaction. In practice, we'd pass the transaction hash
// or the ``StateRef`` as a parameter to the flow, or extract the
// ``StateRef`` from our vault.
// DOCSTART 20
val ourStateRef: StateRef = StateRef(SecureHash.sha256("DummyTransactionHash"), 0)
// DOCEND 20
// A ``StateAndRef`` pairs a ``StateRef`` with the state it points to.
// DOCSTART 21
val ourStateAndRef: StateAndRef<DummyState> = serviceHub.toStateAndRef<DummyState>(ourStateRef)
// DOCEND 21
/**-----------------------------------------
* GATHERING OTHER TRANSACTION COMPONENTS *
-----------------------------------------**/
progressTracker.currentStep = OTHER_TX_COMPONENTS
// Reference input states are constructed from StateAndRefs.
// DOCSTART 55
val referenceState: ReferencedStateAndRef<DummyState> = ourStateAndRef.referenced()
// DOCEND 55
// Output states are constructed from scratch.
// DOCSTART 22
val ourOutputState: DummyState = DummyState()
// DOCEND 22
// Or as copies of other states with some properties changed.
// DOCSTART 23
val ourOtherOutputState: DummyState = ourOutputState.copy(magicNumber = 77)
// DOCEND 23
// We then need to pair our output state with a contract.
// DOCSTART 47
val ourOutput: StateAndContract = StateAndContract(ourOutputState, DummyContract.PROGRAM_ID)
// DOCEND 47
// Commands pair a ``CommandData`` instance with a list of
// public keys. To be valid, the transaction requires a signature
// matching every public key in all of the transaction's commands.
// DOCSTART 24
val commandData: DummyContract.Commands.Create = DummyContract.Commands.Create()
val ourPubKey: PublicKey = serviceHub.myInfo.legalIdentitiesAndCerts.first().owningKey
val counterpartyPubKey: PublicKey = counterparty.owningKey
val requiredSigners: List<PublicKey> = listOf(ourPubKey, counterpartyPubKey)
val ourCommand: Command<DummyContract.Commands.Create> = Command(commandData, requiredSigners)
// DOCEND 24
// ``CommandData`` can either be:
// 1. Of type ``TypeOnlyCommandData``, in which case it only
// serves to attach signers to the transaction and possibly
// fork the contract's verification logic.
val typeOnlyCommandData: TypeOnlyCommandData = DummyContract.Commands.Create()
// 2. Include additional data which can be used by the contract
// during verification, alongside fulfilling the roles above.
val commandDataWithData: CommandData = Cash.Commands.Issue()
// Attachments are identified by their hash.
// The attachment with the corresponding hash must have been
// uploaded ahead of time via the node's RPC interface.
// DOCSTART 25
val ourAttachment: SecureHash = SecureHash.sha256("DummyAttachment")
// DOCEND 25
// Time windows can have a start and end time, or be open at either end.
// DOCSTART 26
val ourTimeWindow: TimeWindow = TimeWindow.between(Instant.MIN, Instant.MAX)
val ourAfter: TimeWindow = TimeWindow.fromOnly(Instant.MIN)
val ourBefore: TimeWindow = TimeWindow.untilOnly(Instant.MAX)
// DOCEND 26
// We can also define a time window as an ``Instant`` +/- a time
// tolerance (e.g. 30 seconds):
// DOCSTART 42
val ourTimeWindow2: TimeWindow = TimeWindow.withTolerance(serviceHub.clock.instant(), 30.seconds)
// DOCEND 42
// Or as a start-time plus a duration:
// DOCSTART 43
val ourTimeWindow3: TimeWindow = TimeWindow.fromStartAndDuration(serviceHub.clock.instant(), 30.seconds)
// DOCEND 43
/**-----------------------
* TRANSACTION BUILDING *
-----------------------**/
progressTracker.currentStep = TX_BUILDING
// If our transaction has input states or a time-window, we must instantiate it with a
// notary.
// DOCSTART 19
val txBuilder: TransactionBuilder = TransactionBuilder(specificNotary)
// DOCEND 19
// Otherwise, we can choose to instantiate it without one:
// DOCSTART 46
val txBuilderNoNotary: TransactionBuilder = TransactionBuilder()
// DOCEND 46
// We add items to the transaction builder using ``TransactionBuilder.withItems``:
// DOCSTART 27
txBuilder.withItems(
// Inputs, as ``StateAndRef``s that reference the outputs of previous transactions
ourStateAndRef,
// Outputs, as ``StateAndContract``s
ourOutput,
// Commands, as ``Command``s
ourCommand,
// Attachments, as ``SecureHash``es
ourAttachment,
// A time-window, as ``TimeWindow``
ourTimeWindow
)
// DOCEND 27
// We can also add items using methods for the individual components.
// The individual methods for adding input states and attachments:
// DOCSTART 28
txBuilder.addInputState(ourStateAndRef)
txBuilder.addAttachment(ourAttachment)
// DOCEND 28
// An output state can be added as a ``ContractState``, contract class name and notary.
// DOCSTART 49
txBuilder.addOutputState(ourOutputState, DummyContract.PROGRAM_ID, specificNotary)
// DOCEND 49
// We can also leave the notary field blank, in which case the transaction's default
// notary is used.
// DOCSTART 50
txBuilder.addOutputState(ourOutputState, DummyContract.PROGRAM_ID)
// DOCEND 50
// Or we can add the output state as a ``TransactionState``, which already specifies
// the output's contract and notary.
// DOCSTART 51
val txState: TransactionState<DummyState> = TransactionState(ourOutputState, DummyContract.PROGRAM_ID, specificNotary)
// DOCEND 51
// Commands can be added as ``Command``s.
// DOCSTART 52
txBuilder.addCommand(ourCommand)
// DOCEND 52
// Or as ``CommandData`` and a ``vararg PublicKey``.
// DOCSTART 53
txBuilder.addCommand(commandData, ourPubKey, counterpartyPubKey)
// DOCEND 53
// We can set a time-window directly.
// DOCSTART 44
txBuilder.setTimeWindow(ourTimeWindow)
// DOCEND 44
// Or as a start time plus a duration (e.g. 45 seconds).
// DOCSTART 45
txBuilder.setTimeWindow(serviceHub.clock.instant(), 45.seconds)
// DOCEND 45
/**----------------------
* TRANSACTION SIGNING *
----------------------**/
progressTracker.currentStep = TX_SIGNING
// We finalise the transaction by signing it, converting it into a
// ``SignedTransaction``.
// DOCSTART 29
val onceSignedTx: SignedTransaction = serviceHub.signInitialTransaction(txBuilder)
// DOCEND 29
// We can also sign the transaction using a different public key:
// DOCSTART 30
val otherIdentity: PartyAndCertificate = serviceHub.keyManagementService.freshKeyAndCert(ourIdentityAndCert, false)
val onceSignedTx2: SignedTransaction = serviceHub.signInitialTransaction(txBuilder, otherIdentity.owningKey)
// DOCEND 30
// If instead this was a ``SignedTransaction`` that we'd received
// from a counterparty and we needed to sign it, we would add our
// signature using:
// DOCSTART 38
val twiceSignedTx: SignedTransaction = serviceHub.addSignature(onceSignedTx)
// DOCEND 38
// Or, if we wanted to use a different public key:
val otherIdentity2: PartyAndCertificate = serviceHub.keyManagementService.freshKeyAndCert(ourIdentityAndCert, false)
// DOCSTART 39
val twiceSignedTx2: SignedTransaction = serviceHub.addSignature(onceSignedTx, otherIdentity2.owningKey)
// DOCEND 39
// We can also generate a signature over the transaction without
// adding it to the transaction itself. We may do this when
// sending just the signature in a flow instead of returning the
// entire transaction with our signature. This way, the receiving
// node does not need to check we haven't changed anything in the
// transaction.
// DOCSTART 40
val sig: TransactionSignature = serviceHub.createSignature(onceSignedTx)
// DOCEND 40
// And again, if we wanted to use a different public key:
// DOCSTART 41
val sig2: TransactionSignature = serviceHub.createSignature(onceSignedTx, otherIdentity2.owningKey)
// DOCEND 41
// In practice, however, the process of gathering every signature
// but the first can be automated using ``CollectSignaturesFlow``.
// See the "Gathering Signatures" section below.
/**---------------------------
* TRANSACTION VERIFICATION *
---------------------------**/
progressTracker.currentStep = TX_VERIFICATION
// Verifying a transaction will also verify every transaction in
// the transaction's dependency chain, which will require
// transaction data access on counterparty's node. The
// ``SendTransactionFlow`` can be used to automate the sending and
// data vending process. The ``SendTransactionFlow`` will listen
// for data request until the transaction is resolved and verified
// on the other side:
// DOCSTART 12
subFlow(SendTransactionFlow(counterpartySession, twiceSignedTx))
// Optional request verification to further restrict data access.
subFlow(object : SendTransactionFlow(counterpartySession, twiceSignedTx) {
override fun verifyDataRequest(dataRequest: FetchDataFlow.Request.Data) {
// Extra request verification.
}
})
// DOCEND 12
// We can receive the transaction using ``ReceiveTransactionFlow``,
// which will automatically download all the dependencies and verify
// the transaction
// DOCSTART 13
val verifiedTransaction = subFlow(ReceiveTransactionFlow(counterpartySession))
// DOCEND 13
// We can also send and receive a `StateAndRef` dependency chain
// and automatically resolve its dependencies.
// DOCSTART 14
subFlow(SendStateAndRefFlow(counterpartySession, dummyStates))
// On the receive side ...
val resolvedStateAndRef = subFlow(ReceiveStateAndRefFlow<DummyState>(counterpartySession))
// DOCEND 14
// We can now verify the transaction to ensure that it satisfies
// the contracts of all the transaction's input and output states.
// DOCSTART 33
twiceSignedTx.verify(serviceHub)
// DOCEND 33
// We'll often want to perform our own additional verification
// too. Just because a transaction is valid based on the contract
// rules and requires our signature doesn't mean we have to
// sign it! We need to make sure the transaction represents an
// agreement we actually want to enter into.
// To do this, we need to convert our ``SignedTransaction``
// into a ``LedgerTransaction``. This will use our ServiceHub
// to resolve the transaction's inputs and attachments into
// actual objects, rather than just references.
// DOCSTART 32
val ledgerTx: LedgerTransaction = twiceSignedTx.toLedgerTransaction(serviceHub)
// DOCEND 32
// We can now perform our additional verification.
// DOCSTART 34
val outputState: DummyState = ledgerTx.outputsOfType<DummyState>().single()
if (outputState.magicNumber != 777) {
// ``FlowException`` is a special exception type. It will be
// propagated back to any counterparty flows waiting for a
// message from this flow, notifying them that the flow has
// failed.
throw FlowException("We expected a magic number of 777.")
}
// DOCEND 34
// Of course, if you are not a required signer on the transaction,
// you have no power to decide whether it is valid or not. If it
// requires signatures from all the required signers and is
// contractually valid, it's a valid ledger update.
/**-----------------------
* GATHERING SIGNATURES *
-----------------------**/
progressTracker.currentStep = SIGS_GATHERING
// The list of parties who need to sign a transaction is dictated
// by the transaction's commands. Once we've signed a transaction
// ourselves, we can automatically gather the signatures of the
// other required signers using ``CollectSignaturesFlow``.
// The responder flow will need to call ``SignTransactionFlow``.
// DOCSTART 15
val fullySignedTx: SignedTransaction = subFlow(CollectSignaturesFlow(twiceSignedTx, setOf(counterpartySession, regulatorSession), SIGS_GATHERING.childProgressTracker()))
// DOCEND 15
/**-----------------------
* VERIFYING SIGNATURES *
-----------------------**/
progressTracker.currentStep = VERIFYING_SIGS
// We can verify that a transaction has all the required
// signatures, and that they're all valid, by running:
// DOCSTART 35
fullySignedTx.verifyRequiredSignatures()
// DOCEND 35
// If the transaction is only partially signed, we have to pass in
// a vararg of the public keys corresponding to the missing
// signatures, explicitly telling the system not to check them.
// DOCSTART 36
onceSignedTx.verifySignaturesExcept(counterpartyPubKey)
// DOCEND 36
// There is also an overload of ``verifySignaturesExcept`` which accepts
// a ``Collection`` of the public keys corresponding to the missing
// signatures.
// DOCSTART 54
onceSignedTx.verifySignaturesExcept(listOf(counterpartyPubKey))
// DOCEND 54
// We can also choose to only check the signatures that are
// present. BE VERY CAREFUL - this function provides no guarantees
// that the signatures are correct, or that none are missing.
// DOCSTART 37
twiceSignedTx.checkSignaturesAreValid()
// DOCEND 37
/**-----------------------------
* FINALISING THE TRANSACTION *
-----------------------------**/
progressTracker.currentStep = FINALISATION
// We notarise the transaction and get it recorded in the vault of
// the participants of all the transaction's states.
// DOCSTART 09
val notarisedTx1: SignedTransaction = subFlow(FinalityFlow(fullySignedTx, listOf(counterpartySession), FINALISATION.childProgressTracker()))
// DOCEND 09
// We can also choose to send it to additional parties who aren't one
// of the state's participants.
// DOCSTART 10
val partySessions: List<FlowSession> = listOf(counterpartySession, initiateFlow(regulator))
val notarisedTx2: SignedTransaction = subFlow(FinalityFlow(fullySignedTx, partySessions, FINALISATION.childProgressTracker()))
// DOCEND 10
// DOCSTART FlowSession porting
send(regulator, Any()) // Old API
// becomes
val session = initiateFlow(regulator)
session.send(Any())
// DOCEND FlowSession porting
}
}
// ``ResponderFlow`` is our second flow, and will communicate with
// ``InitiatorFlow``.
// We mark ``ResponderFlow`` as an ``InitiatedByFlow``, meaning that it
// can only be started in response to a message from its initiating flow.
// That's ``InitiatorFlow`` in this case.
// Each node also has several flow pairs registered by default - see
// ``AbstractNode.installCoreFlows``.
@InitiatedBy(InitiatorFlow::class)
class ResponderFlow(val counterpartySession: FlowSession) : FlowLogic<Unit>() {
companion object {
object RECEIVING_AND_SENDING_DATA : Step("Sending data between parties.")
object SIGNING : Step("Responding to CollectSignaturesFlow.")
object FINALISATION : Step("Finalising a transaction.")
fun tracker() = ProgressTracker(
RECEIVING_AND_SENDING_DATA,
SIGNING,
FINALISATION
)
}
override val progressTracker: ProgressTracker = tracker()
@Suspendable
override fun call() {
// The ``ResponderFlow`` has all the same APIs available. It looks
// up network information, sends and receives data, and constructs
// transactions in exactly the same way.
/**-----------------------------
* SENDING AND RECEIVING DATA *
-----------------------------**/
progressTracker.currentStep = RECEIVING_AND_SENDING_DATA
// We need to respond to the messages sent by the initiator:
// 1. They sent us an ``Any`` instance
// 2. They waited to receive an ``Integer`` instance back
// 3. They sent a ``String`` instance and waited to receive a
// ``Boolean`` instance back
// Our side of the flow must mirror these calls.
// DOCSTART 08
val any: Any = counterpartySession.receive<Any>().unwrap { data -> data }
val string: String = counterpartySession.sendAndReceive<String>(99).unwrap { data -> data }
counterpartySession.send(true)
// DOCEND 08
/**----------------------------------------
* RESPONDING TO COLLECT_SIGNATURES_FLOW *
----------------------------------------**/
progressTracker.currentStep = SIGNING
// The responder will often need to respond to a call to
// ``CollectSignaturesFlow``. It does so by invoking its own
// ``SignTransactionFlow`` subclass.
// DOCSTART 16
val signTransactionFlow: SignTransactionFlow = object : SignTransactionFlow(counterpartySession) {
override fun checkTransaction(stx: SignedTransaction) = requireThat {
// Any additional checking we see fit...
val outputState = stx.tx.outputsOfType<DummyState>().single()
require(outputState.magicNumber == 777)
}
}
val idOfTxWeSigned = subFlow(signTransactionFlow).id
// DOCEND 16
/**-----------------------------
* FINALISING THE TRANSACTION *
-----------------------------**/
progressTracker.currentStep = FINALISATION
// As the final step the responder waits to receive the notarised transaction from the sending party
// Since it knows the ID of the transaction it just signed, the transaction ID is specified to ensure the correct
// transaction is received and recorded.
// DOCSTART ReceiveFinalityFlow
subFlow(ReceiveFinalityFlow(counterpartySession, expectedTxId = idOfTxWeSigned))
// DOCEND ReceiveFinalityFlow
}
}
package net.corda.docs.java;
import co.paralleluniverse.fibers.Suspendable;
import com.google.common.collect.ImmutableList;
import net.corda.core.contracts.*;
import net.corda.core.crypto.SecureHash;
import net.corda.core.crypto.TransactionSignature;
import net.corda.core.flows.*;
import net.corda.core.identity.CordaX500Name;
import net.corda.core.identity.Party;
import net.corda.core.identity.PartyAndCertificate;
import net.corda.core.internal.FetchDataFlow;
import net.corda.core.node.services.Vault;
import net.corda.core.node.services.Vault.Page;
import net.corda.core.node.services.vault.QueryCriteria.VaultQueryCriteria;
import net.corda.core.transactions.LedgerTransaction;
import net.corda.core.transactions.SignedTransaction;
import net.corda.core.transactions.TransactionBuilder;
import net.corda.core.utilities.ProgressTracker;
import net.corda.core.utilities.ProgressTracker.Step;
import net.corda.core.utilities.UntrustworthyData;
import net.corda.finance.contracts.asset.Cash;
import net.corda.testing.contracts.DummyContract;
import net.corda.testing.contracts.DummyState;
import org.jetbrains.annotations.NotNull;
import java.security.GeneralSecurityException;
import java.security.PublicKey;
import java.time.Duration;
import java.time.Instant;
import java.util.Arrays;
import java.util.List;
import static com.google.common.base.Preconditions.checkArgument;
import static java.util.Collections.*;
import static net.corda.core.contracts.ContractsDSL.requireThat;
import static net.corda.core.crypto.Crypto.generateKeyPair;
@SuppressWarnings("unused")
public class FlowCookbook {
// ``InitiatorFlow`` is our first flow, and will communicate with
// ``ResponderFlow``, below.
// We mark ``InitiatorFlow`` as an ``InitiatingFlow``, allowing it to be
// started directly by the node.
@InitiatingFlow
// We also mark ``InitiatorFlow`` as ``StartableByRPC``, allowing the
// node's owner to start the flow via RPC.
@StartableByRPC
// Every flow must subclass ``FlowLogic``. The generic indicates the
// flow's return type.
public static class InitiatorFlow extends FlowLogic<Void> {
private final boolean arg1;
private final int arg2;
private final Party counterparty;
private final Party regulator;
public InitiatorFlow(boolean arg1, int arg2, Party counterparty, Party regulator) {
this.arg1 = arg1;
this.arg2 = arg2;
this.counterparty = counterparty;
this.regulator = regulator;
}
/*----------------------------------
* WIRING UP THE PROGRESS TRACKER *
----------------------------------*/
// Giving our flow a progress tracker allows us to see the flow's
// progress visually in our node's CRaSH shell.
// DOCSTART 17
private static final Step ID_OTHER_NODES = new Step("Identifying other nodes on the network.");
private static final Step SENDING_AND_RECEIVING_DATA = new Step("Sending data between parties.");
private static final Step EXTRACTING_VAULT_STATES = new Step("Extracting states from the vault.");
private static final Step OTHER_TX_COMPONENTS = new Step("Gathering a transaction's other components.");
private static final Step TX_BUILDING = new Step("Building a transaction.");
private static final Step TX_SIGNING = new Step("Signing a transaction.");
private static final Step TX_VERIFICATION = new Step("Verifying a transaction.");
private static final Step SIGS_GATHERING = new Step("Gathering a transaction's signatures.") {
// Wiring up a child progress tracker allows us to see the
// subflow's progress steps in our flow's progress tracker.
@Override
public ProgressTracker childProgressTracker() {
return CollectSignaturesFlow.tracker();
}
};
private static final Step VERIFYING_SIGS = new Step("Verifying a transaction's signatures.");
private static final Step FINALISATION = new Step("Finalising a transaction.") {
@Override
public ProgressTracker childProgressTracker() {
return FinalityFlow.tracker();
}
};
private final ProgressTracker progressTracker = new ProgressTracker(
ID_OTHER_NODES,
SENDING_AND_RECEIVING_DATA,
EXTRACTING_VAULT_STATES,
OTHER_TX_COMPONENTS,
TX_BUILDING,
TX_SIGNING,
TX_VERIFICATION,
SIGS_GATHERING,
VERIFYING_SIGS,
FINALISATION
);
// DOCEND 17
@Suspendable
@Override
public Void call() throws FlowException {
// We'll be using a dummy public key for demonstration purposes.
PublicKey dummyPubKey = generateKeyPair().getPublic();
/*---------------------------
* IDENTIFYING OTHER NODES *
---------------------------*/
// DOCSTART 18
progressTracker.setCurrentStep(ID_OTHER_NODES);
// DOCEND 18
// A transaction generally needs a notary:
// - To prevent double-spends if the transaction has inputs
// - To serve as a timestamping authority if the transaction has a
// time-window
// We retrieve a notary from the network map.
// DOCSTART 01
CordaX500Name notaryName = new CordaX500Name("Notary Service", "London", "GB");
Party specificNotary = getServiceHub().getNetworkMapCache().getNotary(notaryName);
// Alternatively, we can pick an arbitrary notary from the notary
// list. However, it is always preferable to specify the notary
// explicitly, as the notary list might change when new notaries are
// introduced, or old ones decommissioned.
Party firstNotary = getServiceHub().getNetworkMapCache().getNotaryIdentities().get(0);
// DOCEND 01
// We may also need to identify a specific counterparty. We do so
// using the identity service.
// DOCSTART 02
CordaX500Name counterPartyName = new CordaX500Name("NodeA", "London", "GB");
Party namedCounterparty = getServiceHub().getIdentityService().wellKnownPartyFromX500Name(counterPartyName);
Party keyedCounterparty = getServiceHub().getIdentityService().partyFromKey(dummyPubKey);
// DOCEND 02
/*------------------------------
* SENDING AND RECEIVING DATA *
------------------------------*/
progressTracker.setCurrentStep(SENDING_AND_RECEIVING_DATA);
// We start by initiating a flow session with the counterparty. We
// will use this session to send and receive messages from the
// counterparty.
// DOCSTART initiateFlow
FlowSession counterpartySession = initiateFlow(counterparty);
// DOCEND initiateFlow
// We can send arbitrary data to a counterparty.
// If this is the first ``send``, the counterparty will either:
// 1. Ignore the message if they are not registered to respond
// to messages from this flow.
// 2. Start the flow they have registered to respond to this flow,
// and run the flow until the first call to ``receive``, at
// which point they process the message.
// In other words, we are assuming that the counterparty is
// registered to respond to this flow, and has a corresponding
// ``receive`` call.
// DOCSTART 04
counterpartySession.send(new Object());
// DOCEND 04
// We can wait to receive arbitrary data of a specific type from a
// counterparty. Again, this implies a corresponding ``send`` call
// in the counterparty's flow. A few scenarios:
// - We never receive a message back. In the current design, the
// flow is paused until the node's owner kills the flow.
// - Instead of sending a message back, the counterparty throws a
// ``FlowException``. This exception is propagated back to us,
// and we can use the error message to establish what happened.
// - We receive a message back, but it's of the wrong type. In
// this case, a ``FlowException`` is thrown.
// - We receive back a message of the correct type. All is good.
//
// Upon calling ``receive()`` (or ``sendAndReceive()``), the
// ``FlowLogic`` is suspended until it receives a response.
//
// We receive the data wrapped in an ``UntrustworthyData``
// instance. This is a reminder that the data we receive may not
// be what it appears to be! We must unwrap the
// ``UntrustworthyData`` using a lambda.
// DOCSTART 05
UntrustworthyData<Integer> packet1 = counterpartySession.receive(Integer.class);
Integer integer = packet1.unwrap(data -> {
// Perform checking on the object received.
// T O D O: Check the received object.
// Return the object.
return data;
});
// DOCEND 05
// We can also use a single call to send data to a counterparty
// and wait to receive data of a specific type back. The type of
// data sent doesn't need to match the type of the data received
// back.
// DOCSTART 07
UntrustworthyData<Boolean> packet2 = counterpartySession.sendAndReceive(Boolean.class, "You can send and receive any class!");
Boolean bool = packet2.unwrap(data -> {
// Perform checking on the object received.
// T O D O: Check the received object.
// Return the object.
return data;
});
// DOCEND 07
// We're not limited to sending to and receiving from a single
// counterparty. A flow can send messages to as many parties as it
// likes, and each party can invoke a different response flow.
// DOCSTART 06
FlowSession regulatorSession = initiateFlow(regulator);
regulatorSession.send(new Object());
UntrustworthyData<Object> packet3 = regulatorSession.receive(Object.class);
// DOCEND 06
/*------------------------------------
* EXTRACTING STATES FROM THE VAULT *
------------------------------------*/
progressTracker.setCurrentStep(EXTRACTING_VAULT_STATES);
// Let's assume there are already some ``DummyState``s in our
// node's vault, stored there as a result of running past flows,
// and we want to consume them in a transaction. There are many
// ways to extract these states from our vault.
// For example, we would extract any unconsumed ``DummyState``s
// from our vault as follows:
VaultQueryCriteria criteria = new VaultQueryCriteria(Vault.StateStatus.UNCONSUMED);
Page<DummyState> results = getServiceHub().getVaultService().queryBy(DummyState.class, criteria);
List<StateAndRef<DummyState>> dummyStates = results.getStates();
// For a full list of the available ways of extracting states from
// the vault, see the Vault Query docs page.
// When building a transaction, input states are passed in as
// ``StateRef`` instances, which pair the hash of the transaction
// that generated the state with the state's index in the outputs
// of that transaction. In practice, we'd pass the transaction hash
// or the ``StateRef`` as a parameter to the flow, or extract the
// ``StateRef`` from our vault.
// DOCSTART 20
StateRef ourStateRef = new StateRef(SecureHash.sha256("DummyTransactionHash"), 0);
// DOCEND 20
// A ``StateAndRef`` pairs a ``StateRef`` with the state it points to.
// DOCSTART 21
StateAndRef ourStateAndRef = getServiceHub().toStateAndRef(ourStateRef);
// DOCEND 21
/*------------------------------------------
* GATHERING OTHER TRANSACTION COMPONENTS *
------------------------------------------*/
progressTracker.setCurrentStep(OTHER_TX_COMPONENTS);
// Reference input states are constructed from StateAndRefs.
// DOCSTART 55
ReferencedStateAndRef referenceState = ourStateAndRef.referenced();
// DOCEND 55
// Output states are constructed from scratch.
// DOCSTART 22
DummyState ourOutputState = new DummyState();
// DOCEND 22
// Or as copies of other states with some properties changed.
// DOCSTART 23
DummyState ourOtherOutputState = ourOutputState.copy(77);
// DOCEND 23
// We then need to pair our output state with a contract.
// DOCSTART 47
StateAndContract ourOutput = new StateAndContract(ourOutputState, DummyContract.PROGRAM_ID);
// DOCEND 47
// Commands pair a ``CommandData`` instance with a list of
// public keys. To be valid, the transaction requires a signature
// matching every public key in all of the transaction's commands.
// DOCSTART 24
DummyContract.Commands.Create commandData = new DummyContract.Commands.Create();
PublicKey ourPubKey = getServiceHub().getMyInfo().getLegalIdentitiesAndCerts().get(0).getOwningKey();
PublicKey counterpartyPubKey = counterparty.getOwningKey();
List<PublicKey> requiredSigners = ImmutableList.of(ourPubKey, counterpartyPubKey);
Command<DummyContract.Commands.Create> ourCommand = new Command<>(commandData, requiredSigners);
// DOCEND 24
// ``CommandData`` can either be:
// 1. Of type ``TypeOnlyCommandData``, in which case it only
// serves to attach signers to the transaction and possibly
// fork the contract's verification logic.
TypeOnlyCommandData typeOnlyCommandData = new DummyContract.Commands.Create();
// 2. Include additional data which can be used by the contract
// during verification, alongside fulfilling the roles above
CommandData commandDataWithData = new Cash.Commands.Issue();
// Attachments are identified by their hash.
// The attachment with the corresponding hash must have been
// uploaded ahead of time via the node's RPC interface.
// DOCSTART 25
SecureHash ourAttachment = SecureHash.sha256("DummyAttachment");
// DOCEND 25
// Time windows represent the period of time during which a
// transaction must be notarised. They can have a start and an end
// time, or be open at either end.
// DOCSTART 26
TimeWindow ourTimeWindow = TimeWindow.between(Instant.MIN, Instant.MAX);
TimeWindow ourAfter = TimeWindow.fromOnly(Instant.MIN);
TimeWindow ourBefore = TimeWindow.untilOnly(Instant.MAX);
// DOCEND 26
// We can also define a time window as an ``Instant`` +/- a time
// tolerance (e.g. 30 seconds):
// DOCSTART 42
TimeWindow ourTimeWindow2 = TimeWindow.withTolerance(getServiceHub().getClock().instant(), Duration.ofSeconds(30));
// DOCEND 42
// Or as a start-time plus a duration:
// DOCSTART 43
TimeWindow ourTimeWindow3 = TimeWindow.fromStartAndDuration(getServiceHub().getClock().instant(), Duration.ofSeconds(30));
// DOCEND 43
/*------------------------
* TRANSACTION BUILDING *
------------------------*/
progressTracker.setCurrentStep(TX_BUILDING);
// If our transaction has input states or a time-window, we must instantiate it with a
// notary.
// DOCSTART 19
TransactionBuilder txBuilder = new TransactionBuilder(specificNotary);
// DOCEND 19
// Otherwise, we can choose to instantiate it without one:
// DOCSTART 46
TransactionBuilder txBuilderNoNotary = new TransactionBuilder();
// DOCEND 46
// We add items to the transaction builder using ``TransactionBuilder.withItems``:
// DOCSTART 27
txBuilder.withItems(
        // Inputs, as ``StateAndRef``s that reference the outputs of previous transactions
ourStateAndRef,
// Outputs, as ``StateAndContract``s
ourOutput,
// Commands, as ``Command``s
ourCommand,
// Attachments, as ``SecureHash``es
ourAttachment,
// A time-window, as ``TimeWindow``
ourTimeWindow
);
// DOCEND 27
// We can also add items using methods for the individual components.
// The individual methods for adding input states and attachments:
// DOCSTART 28
txBuilder.addInputState(ourStateAndRef);
txBuilder.addAttachment(ourAttachment);
// DOCEND 28
// An output state can be added as a ``ContractState``, contract class name and notary.
// DOCSTART 49
txBuilder.addOutputState(ourOutputState, DummyContract.PROGRAM_ID, specificNotary);
// DOCEND 49
// We can also leave the notary field blank, in which case the transaction's default
// notary is used.
// DOCSTART 50
txBuilder.addOutputState(ourOutputState, DummyContract.PROGRAM_ID);
// DOCEND 50
// Or we can add the output state as a ``TransactionState``, which already specifies
// the output's contract and notary.
// DOCSTART 51
TransactionState txState = new TransactionState(ourOutputState, DummyContract.PROGRAM_ID, specificNotary);
// DOCEND 51
// Commands can be added as ``Command``s.
// DOCSTART 52
txBuilder.addCommand(ourCommand);
// DOCEND 52
// Or as ``CommandData`` and a ``vararg PublicKey``.
// DOCSTART 53
txBuilder.addCommand(commandData, ourPubKey, counterpartyPubKey);
// DOCEND 53
// We can set a time-window directly.
// DOCSTART 44
txBuilder.setTimeWindow(ourTimeWindow);
// DOCEND 44
// Or as a start time plus a duration (e.g. 45 seconds).
// DOCSTART 45
txBuilder.setTimeWindow(getServiceHub().getClock().instant(), Duration.ofSeconds(45));
// DOCEND 45
/*-----------------------
* TRANSACTION SIGNING *
-----------------------*/
progressTracker.setCurrentStep(TX_SIGNING);
// We finalise the transaction by signing it,
// converting it into a ``SignedTransaction``.
// DOCSTART 29
SignedTransaction onceSignedTx = getServiceHub().signInitialTransaction(txBuilder);
// DOCEND 29
// We can also sign the transaction using a different public key:
// DOCSTART 30
PartyAndCertificate otherIdentity = getServiceHub().getKeyManagementService().freshKeyAndCert(getOurIdentityAndCert(), false);
SignedTransaction onceSignedTx2 = getServiceHub().signInitialTransaction(txBuilder, otherIdentity.getOwningKey());
// DOCEND 30
// If instead this was a ``SignedTransaction`` that we'd received
// from a counterparty and we needed to sign it, we would add our
// signature using:
// DOCSTART 38
SignedTransaction twiceSignedTx = getServiceHub().addSignature(onceSignedTx);
// DOCEND 38
// Or, if we wanted to use a different public key:
PartyAndCertificate otherIdentity2 = getServiceHub().getKeyManagementService().freshKeyAndCert(getOurIdentityAndCert(), false);
// DOCSTART 39
SignedTransaction twiceSignedTx2 = getServiceHub().addSignature(onceSignedTx, otherIdentity2.getOwningKey());
// DOCEND 39
// We can also generate a signature over the transaction without
// adding it to the transaction itself. We may do this when
// sending just the signature in a flow instead of returning the
// entire transaction with our signature. This way, the receiving
// node does not need to check we haven't changed anything in the
// transaction.
// DOCSTART 40
TransactionSignature sig = getServiceHub().createSignature(onceSignedTx);
// DOCEND 40
// And again, if we wanted to use a different public key:
// DOCSTART 41
TransactionSignature sig2 = getServiceHub().createSignature(onceSignedTx, otherIdentity2.getOwningKey());
// DOCEND 41
/*----------------------------
* TRANSACTION VERIFICATION *
----------------------------*/
progressTracker.setCurrentStep(TX_VERIFICATION);
// Verifying a transaction will also verify every transaction in
// the transaction's dependency chain, which requires access to
// transaction data on the counterparty's node. The
// ``SendTransactionFlow`` can be used to automate the sending and
// data-vending process. The ``SendTransactionFlow`` will listen
// for data requests until the transaction is resolved and verified
// on the other side:
// DOCSTART 12
subFlow(new SendTransactionFlow(counterpartySession, twiceSignedTx));
// Optional request verification to further restrict data access.
subFlow(new SendTransactionFlow(counterpartySession, twiceSignedTx) {
@Override
protected void verifyDataRequest(@NotNull FetchDataFlow.Request.Data dataRequest) {
// Extra request verification.
}
});
// DOCEND 12
// We can receive the transaction using ``ReceiveTransactionFlow``,
// which will automatically download all the dependencies, verify
// the transaction, and then record it in our vault:
// DOCSTART 13
SignedTransaction verifiedTransaction = subFlow(new ReceiveTransactionFlow(counterpartySession));
// DOCEND 13
// We can also send and receive a ``StateAndRef`` dependency chain and automatically resolve its dependencies.
// DOCSTART 14
subFlow(new SendStateAndRefFlow(counterpartySession, dummyStates));
// On the receive side ...
List<StateAndRef<DummyState>> resolvedStateAndRef = subFlow(new ReceiveStateAndRefFlow<>(counterpartySession));
// DOCEND 14
try {
// We can now verify the transaction to ensure that it satisfies
// the contracts of all the transaction's input and output states.
// DOCSTART 33
twiceSignedTx.verify(getServiceHub());
// DOCEND 33
// We'll often want to perform our own additional verification
// too. Just because a transaction is valid based on the contract
// rules and requires our signature doesn't mean we have to
// sign it! We need to make sure the transaction represents an
// agreement we actually want to enter into.
// To do this, we need to convert our ``SignedTransaction``
// into a ``LedgerTransaction``. This will use our ServiceHub
// to resolve the transaction's inputs and attachments into
// actual objects, rather than just references.
// DOCSTART 32
LedgerTransaction ledgerTx = twiceSignedTx.toLedgerTransaction(getServiceHub());
// DOCEND 32
// We can now perform our additional verification.
// DOCSTART 34
DummyState outputState = ledgerTx.outputsOfType(DummyState.class).get(0);
if (outputState.getMagicNumber() != 777) {
// ``FlowException`` is a special exception type. It will be
// propagated back to any counterparty flows waiting for a
// message from this flow, notifying them that the flow has
// failed.
throw new FlowException("We expected a magic number of 777.");
}
// DOCEND 34
} catch (GeneralSecurityException e) {
// Handle this as required.
}
// Of course, if you are not a required signer on the transaction,
// you have no power to decide whether it is valid or not. If it
// requires signatures from all the required signers and is
// contractually valid, it's a valid ledger update.
/*------------------------
* GATHERING SIGNATURES *
------------------------*/
progressTracker.setCurrentStep(SIGS_GATHERING);
// The list of parties who need to sign a transaction is dictated
// by the transaction's commands. Once we've signed a transaction
// ourselves, we can automatically gather the signatures of the
// other required signers using ``CollectSignaturesFlow``.
// The responder flow will need to call ``SignTransactionFlow``.
// DOCSTART 15
SignedTransaction fullySignedTx = subFlow(new CollectSignaturesFlow(twiceSignedTx, emptySet(), SIGS_GATHERING.childProgressTracker()));
// DOCEND 15
/*------------------------
* VERIFYING SIGNATURES *
------------------------*/
progressTracker.setCurrentStep(VERIFYING_SIGS);
try {
// We can verify that a transaction has all the required
// signatures, and that they're all valid, by running:
// DOCSTART 35
fullySignedTx.verifyRequiredSignatures();
// DOCEND 35
// If the transaction is only partially signed, we have to pass in
// a vararg of the public keys corresponding to the missing
// signatures, explicitly telling the system not to check them.
// DOCSTART 36
onceSignedTx.verifySignaturesExcept(counterpartyPubKey);
// DOCEND 36
// There is also an overload of ``verifySignaturesExcept`` which accepts
// a ``Collection`` of the public keys corresponding to the missing
// signatures. In the example below, we could also use
// ``Arrays.asList(counterpartyPubKey)`` instead of
// ``Collections.singletonList(counterpartyPubKey)``.
// DOCSTART 54
onceSignedTx.verifySignaturesExcept(singletonList(counterpartyPubKey));
// DOCEND 54
// We can also choose to only check the signatures that are
// present. BE VERY CAREFUL - this function provides no guarantees
// that the signatures are correct, or that none are missing.
// DOCSTART 37
twiceSignedTx.checkSignaturesAreValid();
// DOCEND 37
} catch (GeneralSecurityException e) {
// Handle this as required.
}
/*------------------------------
* FINALISING THE TRANSACTION *
------------------------------*/
progressTracker.setCurrentStep(FINALISATION);
// We notarise the transaction and get it recorded in the vault of
// the participants of all the transaction's states.
// DOCSTART 09
SignedTransaction notarisedTx1 = subFlow(new FinalityFlow(fullySignedTx, singleton(counterpartySession), FINALISATION.childProgressTracker()));
// DOCEND 09
// We can also choose to send it to additional parties who aren't one
// of the state's participants.
// DOCSTART 10
List<FlowSession> partySessions = Arrays.asList(counterpartySession, initiateFlow(regulator));
SignedTransaction notarisedTx2 = subFlow(new FinalityFlow(fullySignedTx, partySessions, FINALISATION.childProgressTracker()));
// DOCEND 10
// DOCSTART FlowSession porting
send(regulator, new Object()); // Old API
// becomes
FlowSession session = initiateFlow(regulator);
session.send(new Object());
// DOCEND FlowSession porting
return null;
}
}
// ``ResponderFlow`` is our second flow, and will communicate with
// ``InitiatorFlow``.
// We annotate ``ResponderFlow`` with ``@InitiatedBy``, meaning that it
// can only be started in response to a message from its initiating flow.
// That's ``InitiatorFlow`` in this case.
// Each node also has several flow pairs registered by default - see
// ``AbstractNode.installCoreFlows``.
@InitiatedBy(InitiatorFlow.class)
public static class ResponderFlow extends FlowLogic<Void> {
private final FlowSession counterpartySession;
public ResponderFlow(FlowSession counterpartySession) {
this.counterpartySession = counterpartySession;
}
private static final Step RECEIVING_AND_SENDING_DATA = new Step("Sending data between parties.");
private static final Step SIGNING = new Step("Responding to CollectSignaturesFlow.");
private static final Step FINALISATION = new Step("Finalising a transaction.");
private final ProgressTracker progressTracker = new ProgressTracker(
RECEIVING_AND_SENDING_DATA,
SIGNING,
FINALISATION
);
@Suspendable
@Override
public Void call() throws FlowException {
        // The ``ResponderFlow`` has all the same APIs available. It looks
// up network information, sends and receives data, and constructs
// transactions in exactly the same way.
/*------------------------------
* SENDING AND RECEIVING DATA *
-----------------------------*/
progressTracker.setCurrentStep(RECEIVING_AND_SENDING_DATA);
// We need to respond to the messages sent by the initiator:
// 1. They sent us an ``Object`` instance
// 2. They waited to receive an ``Integer`` instance back
// 3. They sent a ``String`` instance and waited to receive a
// ``Boolean`` instance back
// Our side of the flow must mirror these calls.
// DOCSTART 08
Object obj = counterpartySession.receive(Object.class).unwrap(data -> data);
String string = counterpartySession.sendAndReceive(String.class, 99).unwrap(data -> data);
counterpartySession.send(true);
// DOCEND 08
/*-----------------------------------------
* RESPONDING TO COLLECT_SIGNATURES_FLOW *
-----------------------------------------*/
progressTracker.setCurrentStep(SIGNING);
// The responder will often need to respond to a call to
        // ``CollectSignaturesFlow``. It does so by invoking its own
// ``SignTransactionFlow`` subclass.
// DOCSTART 16
class SignTxFlow extends SignTransactionFlow {
private SignTxFlow(FlowSession otherSession, ProgressTracker progressTracker) {
super(otherSession, progressTracker);
}
@Override
protected void checkTransaction(SignedTransaction stx) {
requireThat(require -> {
// Any additional checking we see fit...
DummyState outputState = (DummyState) stx.getTx().getOutputs().get(0).getData();
checkArgument(outputState.getMagicNumber() == 777);
return null;
});
}
}
SecureHash idOfTxWeSigned = subFlow(new SignTxFlow(counterpartySession, SignTransactionFlow.tracker())).getId();
// DOCEND 16
/*------------------------------
* FINALISING THE TRANSACTION *
------------------------------*/
progressTracker.setCurrentStep(FINALISATION);
// As the final step the responder waits to receive the notarised transaction from the sending party
// Since it knows the ID of the transaction it just signed, the transaction ID is specified to ensure the correct
// transaction is received and recorded.
// DOCSTART ReceiveFinalityFlow
subFlow(new ReceiveFinalityFlow(counterpartySession, idOfTxWeSigned));
// DOCEND ReceiveFinalityFlow
return null;
}
}
}
Tutorials¶
This section is split into two parts.
The Hello, World tutorials should be followed in sequence; they show how to extend the Java or Kotlin CorDapp Template into a full CorDapp.
Hello, World!¶
The CorDapp Template¶
When writing a new CorDapp, you’ll generally want to start from one of the standard templates:
The CorDapp templates provide the boilerplate for developing a new CorDapp. CorDapps can be written in either Java or Kotlin. We will be providing the code in both languages throughout this tutorial.
Note that there’s no need to download and install Corda itself. The required libraries are automatically downloaded from an online Maven repository and cached locally.
Downloading the template¶
Open a terminal window in the directory where you want to download the CorDapp template, and run the following command:
git clone https://github.com/corda/cordapp-template-java.git ; cd cordapp-template-java
git clone https://github.com/corda/cordapp-template-kotlin.git ; cd cordapp-template-kotlin
Opening the template in IntelliJ¶
Once the template is downloaded, open it in IntelliJ by following the instructions here: https://docs.corda.net/tutorial-cordapp.html#opening-the-example-cordapp-in-intellij.
Template structure¶
For this tutorial, we will only be modifying the following files:
// 1. The state
contracts/src/main/java/com/template/states/TemplateState.java
// 2. The flow
workflows/src/main/java/com/template/flows/Initiator.java
// 1. The state
contracts/src/main/kotlin/com/template/states/TemplateState.kt
// 2. The flow
workflows/src/main/kotlin/com/template/flows/Flows.kt
Progress so far¶
We now have a template that we can build upon to define our IOU CorDapp. Let’s start by defining the IOUState
.
Writing the state¶
In Corda, shared facts on the blockchain are represented as states. Our first task will be to define a new state type to represent an IOU.
The ContractState interface¶
A Corda state is any instance of a class that implements the ContractState
interface. The ContractState
interface is defined as follows:
interface ContractState {
// The list of entities considered to have a stake in this state.
val participants: List<AbstractParty>
}
We can see that the ContractState
interface has a single field, participants
. participants
is a list of the
entities for which this state is relevant.
Beyond this, our state is free to define any fields, methods, helpers or inner classes it requires to accurately represent a given type of shared fact on the blockchain.
Note
The first thing you’ll probably notice about the declaration of ContractState
is that it's not written in Java
or another common language. The core Corda platform, including the interface declaration above, is entirely written
in Kotlin.
Learning some Kotlin will be very useful for understanding how Corda works internally, and usually only takes an experienced Java developer a day or so to pick up. However, learning Kotlin isn’t essential. Because Kotlin code compiles to JVM bytecode, CorDapps written in other JVM languages such as Java can interoperate with Corda.
If you do want to dive into Kotlin, there’s an official getting started guide, and a series of Kotlin Koans.
Modelling IOUs¶
How should we define the IOUState
representing IOUs on the blockchain? Beyond implementing the ContractState
interface, our IOUState
will also need properties to track the relevant features of the IOU:
- The value of the IOU
- The lender of the IOU
- The borrower of the IOU
There are many more fields you could include, such as the IOU's currency, but let's ignore those for now. Adding them later is often as simple as adding an additional property to your class definition (a sketch of such an extension follows the state definition below).
Defining IOUState¶
Let’s get started by opening TemplateState.java
(for Java) or StatesAndContracts.kt
(for Kotlin) and updating
TemplateState
to define an IOUState
:
// Add this import:
import net.corda.core.identity.Party
// Replace TemplateState's definition with:
class IOUState(val value: Int,
val lender: Party,
val borrower: Party) : ContractState {
override val participants get() = listOf(lender, borrower)
}
// Add this import:
import net.corda.core.identity.Party;
// Replace TemplateState's definition with:
public class IOUState implements ContractState {
private final int value;
private final Party lender;
private final Party borrower;
public IOUState(int value, Party lender, Party borrower) {
this.value = value;
this.lender = lender;
this.borrower = borrower;
}
public int getValue() {
return value;
}
public Party getLender() {
return lender;
}
public Party getBorrower() {
return borrower;
}
@Override
public List<AbstractParty> getParticipants() {
return Arrays.asList(lender, borrower);
}
}
If you’re following along in Java, you’ll also need to rename TemplateState.java
to IOUState.java
.
To define IOUState
, we’ve made the following changes:
- We’ve renamed the
TemplateState
class toIOUState
- We’ve added properties for
value
,lender
andborrower
, along with the required getters and setters in Java:value
is of typeint
(in Java)/Int
(in Kotlin)lender
andborrower
are of typeParty
Party
is a built-in Corda type that represents an entity on the network
- We’ve overridden
participants
to return a list of thelender
andborrower
participants
is a list of all the parties who should be notified of the creation or consumption of this state
The IOUs that we issue onto a ledger will simply be instances of this class.
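As a rough sketch of the extension mentioned earlier - the class name IOUStateWithCurrency, the currency property, and the use of java.util.Currency are illustrative choices of ours, not part of the template - adding a currency is just one more constructor parameter and getter:
import java.util.Arrays;
import java.util.Currency;
import java.util.List;
import net.corda.core.contracts.ContractState;
import net.corda.core.identity.AbstractParty;
import net.corda.core.identity.Party;
// A hypothetical IOUState extended with a currency field.
public class IOUStateWithCurrency implements ContractState {
    private final int value;
    private final Currency currency;  // e.g. Currency.getInstance("GBP")
    private final Party lender;
    private final Party borrower;
    public IOUStateWithCurrency(int value, Currency currency, Party lender, Party borrower) {
        this.value = value;
        this.currency = currency;
        this.lender = lender;
        this.borrower = borrower;
    }
    public int getValue() { return value; }
    public Currency getCurrency() { return currency; }
    public Party getLender() { return lender; }
    public Party getBorrower() { return borrower; }
    @Override
    public List<AbstractParty> getParticipants() {
        return Arrays.asList(lender, borrower);
    }
}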
Progress so far¶
We’ve defined an IOUState
that can be used to represent IOUs as shared facts on a ledger. As we’ve seen, states in
Corda are simply classes that implement the ContractState
interface. They can have any additional properties and
methods you like.
All that’s left to do is write the IOUFlow
that will allow a node to orchestrate the creation of a new IOUState
on the blockchain, while only sharing information on a need-to-know basis.
What about the contract?¶
If you’ve read the white paper or Key Concepts section, you’ll know that each state has an associated contract that
imposes invariants on how the state evolves over time. Including a contract isn’t crucial for our first CorDapp, so
we’ll just use the empty TemplateContract
and TemplateContract.Commands.Action
command defined by the template
for now. In the next tutorial, we’ll implement our own contract and command.
Writing the flow¶
A flow encodes a sequence of steps that a node can perform to achieve a specific ledger update. By installing new flows
on a node, we allow the node to handle new business processes. The flow we define will allow a node to issue an
IOUState
onto the ledger.
Flow outline¶
The goal of our flow will be to orchestrate an IOU issuance transaction. Transactions in Corda are the atomic units of change that update the ledger. Each transaction is a proposal to mark zero or more existing states as historic (the inputs), while creating zero or more new states (the outputs).
The process of creating and applying this transaction to a ledger will be conducted by the IOU’s lender, and will require the following steps:
- Building the transaction proposal for the issuance of a new IOU onto a ledger
- Signing the transaction proposal
- Recording the transaction and sending it to the IOU’s borrower so that they can record it too
We also need the borrower to receive the transaction and record it for itself. At this stage, we do not require the borrower to approve and sign IOU issuance transactions. We will be able to impose this requirement when we look at contracts in the next tutorial.
Warning
The execution of a flow is distributed in space and time, as the flow crosses node boundaries and each participant may have to wait for other participants to respond before it can complete its part of the overall work. While a node is waiting, the state of its flow may be persistently recorded to disk as a restorable checkpoint, enabling it to carry on where it left off when a counterparty responds. However, before a node can be upgraded to a newer version of Corda, or of your CorDapp, all flows must have completed, as there is no mechanism to upgrade a persisted flow checkpoint. It is therefore undesirable to model a long-running business process as a single flow: it should rather be broken up into a series of transactions, with flows used only to orchestrate the completion of each transaction.
Subflows¶
Tasks like recording a transaction or sending a transaction to a counterparty are very common in Corda. Instead of forcing each developer to reimplement this logic, Corda provides a number of library flows. Flows that are invoked in the context of a larger flow to handle a repeatable task are called subflows.
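Invoking a subflow is a single call to FlowLogic.subFlow from inside call(). As a minimal sketch - signedTx and otherPartySession are assumed to already be in scope, as they will be in the flows we write below:
// Hand off notarisation and recording to the built-in FinalityFlow subflow.
SignedTransaction notarisedTx = subFlow(new FinalityFlow(signedTx, otherPartySession));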
FlowLogic¶
All flows must subclass FlowLogic
. You then define the steps taken by the flow by overriding FlowLogic.call
.
Let’s define our IOUFlow
. Replace the definition of Initiator
with the following:
// Add these imports:
import net.corda.core.contracts.Command
import net.corda.core.identity.Party
import net.corda.core.transactions.TransactionBuilder
// Replace Initiator's definition with:
@InitiatingFlow
@StartableByRPC
class IOUFlow(val iouValue: Int,
val otherParty: Party) : FlowLogic<Unit>() {
/** The progress tracker provides checkpoints indicating the progress of the flow to observers. */
override val progressTracker = ProgressTracker()
/** The flow logic is encapsulated within the call() method. */
@Suspendable
override fun call() {
// We retrieve the notary identity from the network map.
val notary = serviceHub.networkMapCache.notaryIdentities[0]
// We create the transaction components.
val outputState = IOUState(iouValue, ourIdentity, otherParty)
val command = Command(TemplateContract.Commands.Action(), ourIdentity.owningKey)
// We create a transaction builder and add the components.
val txBuilder = TransactionBuilder(notary = notary)
.addOutputState(outputState, TemplateContract.ID)
.addCommand(command)
// We sign the transaction.
val signedTx = serviceHub.signInitialTransaction(txBuilder)
// Creating a session with the other party.
val otherPartySession = initiateFlow(otherParty)
// We finalise the transaction and then send it to the counterparty.
subFlow(FinalityFlow(signedTx, otherPartySession))
}
}
// Add these imports:
import net.corda.core.contracts.Command;
import net.corda.core.identity.Party;
import net.corda.core.transactions.SignedTransaction;
import net.corda.core.transactions.TransactionBuilder;
// Replace Initiator's definition with:
@InitiatingFlow
@StartableByRPC
public class IOUFlow extends FlowLogic<Void> {
private final Integer iouValue;
private final Party otherParty;
/**
* The progress tracker provides checkpoints indicating the progress of the flow to observers.
*/
private final ProgressTracker progressTracker = new ProgressTracker();
public IOUFlow(Integer iouValue, Party otherParty) {
this.iouValue = iouValue;
this.otherParty = otherParty;
}
@Override
public ProgressTracker getProgressTracker() {
return progressTracker;
}
/**
* The flow logic is encapsulated within the call() method.
*/
@Suspendable
@Override
public Void call() throws FlowException {
// We retrieve the notary identity from the network map.
Party notary = getServiceHub().getNetworkMapCache().getNotaryIdentities().get(0);
// We create the transaction components.
IOUState outputState = new IOUState(iouValue, getOurIdentity(), otherParty);
Command command = new Command<>(new TemplateContract.Commands.Action(), getOurIdentity().getOwningKey());
// We create a transaction builder and add the components.
TransactionBuilder txBuilder = new TransactionBuilder(notary)
.addOutputState(outputState, TemplateContract.ID)
.addCommand(command);
// Signing the transaction.
SignedTransaction signedTx = getServiceHub().signInitialTransaction(txBuilder);
// Creating a session with the other party.
FlowSession otherPartySession = initiateFlow(otherParty);
// We finalise the transaction and then send it to the counterparty.
subFlow(new FinalityFlow(signedTx, otherPartySession));
return null;
}
}
If you’re following along in Java, you’ll also need to rename Initiator.java
to IOUFlow.java
.
Let’s walk through this code step-by-step.
We’ve defined our own FlowLogic
subclass that overrides FlowLogic.call
. FlowLogic.call
has a return type
that must match the type parameter passed to FlowLogic
- this is the type returned by running the flow.
FlowLogic
subclasses can optionally have constructor parameters, which can be used as arguments to
FlowLogic.call
. In our case, we have two:
- iouValue, which is the value of the IOU being issued
- otherParty, the IOU's borrower (the node running the flow is the lender)
FlowLogic.call
is annotated @Suspendable
- this allows the flow to be check-pointed and serialised to disk when
it encounters a long-running operation, allowing your node to move on to running other flows. Leaving this
annotation out will lead to some very weird error messages!
There are also a few more annotations, on the FlowLogic
subclass itself:
- @InitiatingFlow means that this flow is part of a flow pair and that it triggers the other side to run the counterpart flow (which in our case is the IOUFlowResponder defined below)
- @StartableByRPC allows the node owner to start this flow via an RPC call (a sketch of such a call follows this list)
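As a rough sketch of what starting the flow over RPC might look like from a standalone Java client - the host, port, username and password are the ones deployNodes configures for PartyA later in this tutorial, and the client classes live in the corda-rpc module:
import net.corda.client.rpc.CordaRPCClient;
import net.corda.core.identity.CordaX500Name;
import net.corda.core.identity.Party;
import net.corda.core.messaging.CordaRPCOps;
import net.corda.core.utilities.NetworkHostAndPort;
public class StartIOUFlowViaRpc {
    public static void main(String[] args) throws Exception {
        // Connect to PartyA's RPC port and log in with an RPC user.
        CordaRPCClient client = new CordaRPCClient(NetworkHostAndPort.parse("localhost:10006"));
        CordaRPCOps proxy = client.start("user1", "test").getProxy();
        // Resolve the borrower's well-known identity from its X.500 name.
        Party otherParty = proxy.wellKnownPartyFromX500Name(CordaX500Name.parse("O=PartyB,L=New York,C=US"));
        // This call is only possible because IOUFlow is annotated with @StartableByRPC.
        proxy.startFlowDynamic(IOUFlow.class, 99, otherParty).getReturnValue().get();
    }
}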
Let’s walk through the steps of FlowLogic.call
itself. This is where we actually describe the procedure for
issuing the IOUState
onto a ledger.
Choosing a notary¶
Every transaction requires a notary to prevent double-spends and serve as a timestamping authority. The first thing we
do in our flow is retrieve a notary from the node's ServiceHub
. ServiceHub.networkMapCache
provides
information about the other nodes on the network and the services that they offer.
Note
Whenever we need information within a flow - whether it’s about our own node’s identity, the node’s local storage,
or the rest of the network - we generally obtain it via the node’s ServiceHub
.
Building the transaction¶
We’ll build our transaction proposal in two steps:
- Creating the transaction’s components
- Adding these components to a transaction builder
Our transaction will have the following structure:
- The output
IOUState
represents the state we will be adding to the ledger. As you can see, there are no inputs - we are not consuming any existing ledger states in the creation of our IOU
Action
command listing the IOU’s lender as a signer
We’ve already talked about the IOUState
, but we haven’t looked at commands yet. Commands serve two functions:
- They indicate the intent of a transaction - issuance, transfer, redemption, revocation. This will be crucial when we discuss contracts in the next tutorial
- They allow us to define the required signers for the transaction. For example, IOU creation might require signatures from the lender only, whereas the transfer of an IOU might require signatures from both the IOU’s borrower and lender
Each Command
contains a command type plus a list of public keys. For now, we use the pre-defined
TemplateContract.Commands.Action
as our command type, and we list the lender as the only public key. This means that for
the transaction to be valid, the lender is required to sign the transaction.
To actually build the proposed transaction, we need a TransactionBuilder
. This is a mutable transaction class to
which we can add inputs, outputs, commands, and any other items the transaction needs. We create a
TransactionBuilder
that uses the notary we retrieved earlier.
Once we have the TransactionBuilder
, we add our components:
- The command is added directly using
TransactionBuilder.addCommand
- The output
IOUState
is added using TransactionBuilder.addOutputState
. As well as the output state itself, this method takes a reference to the contract that will govern the evolution of the state over time. Here, we are passing in a reference to the TemplateContract
, which imposes no constraints. We will define a contract imposing real constraints in the next tutorial
Signing the transaction¶
Now that we have a valid transaction proposal, we need to sign it. Once the transaction is signed, no-one will be able to modify the transaction without invalidating this signature. This effectively makes the transaction immutable.
We sign the transaction using ServiceHub.signInitialTransaction
, which returns a SignedTransaction
. A
SignedTransaction
is an object that pairs a transaction with a list of signatures over that transaction.
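As a quick sketch of that pairing (the variable name signedTx is ours):
SecureHash txId = signedTx.getId();                    // The hash identifying the underlying transaction
List<TransactionSignature> sigs = signedTx.getSigs();  // The signatures collected over it so far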
Finalising the transaction¶
We now have a valid signed transaction. All that’s left to do is to get the notary to sign it, have that recorded
locally and then send it to all the relevant parties. Once that happens the transaction will become a permanent part of the
ledger. We use FinalityFlow
which does all of this for the lender.
For the borrower to receive the transaction, they just need a flow that responds to the lender's.
Creating the borrower’s flow¶
The borrower has to use ReceiveFinalityFlow
in order to receive and record the transaction; it needs to respond to
the lender’s flow. Let’s do that by replacing Responder
from the template with the following:
// Replace Responder's definition with:
@InitiatedBy(IOUFlow::class)
class IOUFlowResponder(private val otherPartySession: FlowSession) : FlowLogic<Unit>() {
@Suspendable
override fun call() {
subFlow(ReceiveFinalityFlow(otherPartySession))
}
}
// Replace Responder's definition with:
@InitiatedBy(IOUFlow.class)
public class IOUFlowResponder extends FlowLogic<Void> {
private final FlowSession otherPartySession;
public IOUFlowResponder(FlowSession otherPartySession) {
this.otherPartySession = otherPartySession;
}
@Suspendable
@Override
public Void call() throws FlowException {
subFlow(new ReceiveFinalityFlow(otherPartySession));
return null;
}
}
As with the IOUFlow
, our IOUFlowResponder
flow is a FlowLogic
subclass where we’ve overridden FlowLogic.call
.
The flow is annotated with InitiatedBy(IOUFlow.class)
, which means that your node will invoke
IOUFlowResponder.call
when it receives a message from a instance of Initiator
running on another node. This message
will be the finalised transaction which will be recorded in the borrower’s vault.
Progress so far¶
Our flow, and our CorDapp, are now ready! We have now defined a flow that we can start on our node to completely automate the process of issuing an IOU onto the ledger. All that’s left is to spin up some nodes and test our CorDapp.
Running our CorDapp¶
Now that we’ve written a CorDapp, it’s time to test it by running it on some real Corda nodes.
Deploying our CorDapp¶
Let’s take a look at the nodes we’re going to deploy. Open the project’s build.gradle
file and scroll down to the
task deployNodes
section. This section defines three nodes. There are two standard nodes (PartyA
and
PartyB
), plus a special network map/notary node that is running the network map service and advertises a validating notary
service.
task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) {
nodeDefaults {
cordapps = [
"net.corda:corda-finance-contracts:$corda_release_version",
"net.corda:corda-finance-workflows:$corda_release_version",
"net.corda:corda-confidential-identities:$corda_release_version"
]
}
directory "./build/nodes"
node {
name "O=Notary,L=London,C=GB"
notary = [validating : true]
p2pPort 10002
rpcPort 10003
}
node {
name "O=PartyA,L=London,C=GB"
p2pPort 10005
rpcPort 10006
webPort 10007
        rpcUsers = [[ user: "user1", "password": "test", "permissions": ["ALL"]]]
}
node {
name "O=PartyB,L=New York,C=US"
p2pPort 10008
rpcPort 10009
webPort 10010
sshdPort 10024
rpcUsers = [[ user: "user1", "password": "test", "permissions": ["ALL"]]]
}
}
We can run this deployNodes
task using Gradle. For each node definition, Gradle will:
- Package the project’s source files into a CorDapp jar
- Create a new node in
build/nodes
with our CorDapp already installed
We can do that now by running the following commands from the root of the project:
// On Windows
gradlew clean deployNodes
// On Mac
./gradlew clean deployNodes
Running the nodes¶
Running deployNodes
will build the nodes under build/nodes
. If we navigate to one of these folders, we’ll see
the three node folders. Each node folder has the following structure:
.
|____corda.jar              // The runnable node
|____corda-webserver.jar    // The node's webserver (the notary doesn't need a webserver)
|____node.conf              // The node's configuration file
|____cordapps
| |____java/kotlin-source-0.1.jar    // Our IOU CorDapp
Let’s start the nodes by running the following commands from the root of the project:
// On Windows
build/nodes/runnodes.bat
// On Mac
build/nodes/runnodes
This will start a terminal window for each node, and an additional terminal window for each node's webserver - five terminal windows in all. Give each node a moment to start - you'll know it's ready when its terminal window displays the message "Welcome to the Corda interactive shell.".
Interacting with the nodes¶
Now that our nodes are running, let’s order one of them to create an IOU by kicking off our IOUFlow
. In a larger
app, we’d generally provide a web API sitting on top of our node. Here, for simplicity, we’ll be interacting with the
node via its built-in CRaSH shell.
Go to the terminal window displaying the CRaSH shell of PartyA. Typing help
will display a list of the available
commands.
Note
The local terminal shell is available only in development mode. In a production environment, an SSH server can be enabled instead. More about SSH and how to connect can be found on the Node shell page.
We want to create an IOU of 99 with PartyB. We start the IOUFlow
by typing:
start IOUFlow iouValue: 99, otherParty: "O=PartyB,L=New York,C=US"
This single command will cause PartyA and PartyB to automatically agree an IOU. This is one of the great advantages of the flow framework - it allows you to reduce complex negotiation and update processes into a single function call.
If the flow worked, it should have recorded a new IOU in the vaults of both PartyA and PartyB. Let’s check.
We can check the contents of each node’s vault by running:
run vaultQuery contractStateType: com.template.IOUState
The vaults of PartyA and PartyB should both display the following output:
states:
- state:
data:
value: 99
lender: "C=GB,L=London,O=PartyA"
borrower: "C=US,L=New York,O=PartyB"
participants:
- "C=GB,L=London,O=PartyA"
- "C=US,L=New York,O=PartyB"
contract: "com.template.contract.IOUContract"
notary: "C=GB,L=London,O=Notary"
encumbrance: null
constraint:
attachmentId: "F578320232CAB87BB1E919F3E5DB9D81B7346F9D7EA6D9155DC0F7BA8E472552"
ref:
txhash: "5CED068E790A347B0DD1C6BB5B2B463406807F95E080037208627565E6A2103B"
index: 0
statesMetadata:
- ref:
txhash: "5CED068E790A347B0DD1C6BB5B2B463406807F95E080037208627565E6A2103B"
index: 0
contractStateClassName: "com.template.state.IOUState"
recordedTime: 1506415268.875000000
consumedTime: null
status: "UNCONSUMED"
notary: "C=GB,L=London,O=Notary"
lockId: null
lockUpdateTime: 1506415269.548000000
totalStatesAvailable: -1
stateTypes: "UNCONSUMED"
otherResults: []
This is the transaction issuing our IOUState
onto a ledger.
However, if we run the same command on the other node (the notary), we will see the following:
{
"states" : [ ],
"statesMetadata" : [ ],
"totalStatesAvailable" : -1,
"stateTypes" : "UNCONSUMED",
"otherResults" : [ ]
}
This is the result of Corda’s privacy model. Because the notary was not involved in the transaction and had no need to see the data, the transaction was not distributed to them.
Conclusion¶
We have written a simple CorDapp that allows IOUs to be issued onto the ledger. Our CorDapp is made up of two key parts:
- The
IOUState
, representing IOUs on the blockchain - The
IOUFlow
, orchestrating the process of agreeing the creation of an IOU on-ledger
After completing this tutorial, your CorDapp should look like this:
Next steps¶
There are a number of improvements we could make to this CorDapp:
- We could add unit tests, using the contract-test and flow-test frameworks (see the sketch after this list)
- We could change
IOUState.value
from an integer to a proper amount of a given currency - We could add an API, to make it easier to interact with the CorDapp
But for now, the biggest priority is to add an IOUContract
imposing constraints on the evolution of each
IOUState
over time. This will be the focus of our next tutorial.
By this point, your dev environment should be set up, you’ve run your first CorDapp, and you’re familiar with Corda’s key concepts. What comes next?
If you’re a developer, the next step is to write your own CorDapp. CorDapps are applications that are installed on one or more Corda nodes, and that allow the node’s operator to instruct their node to perform some new process - anything from issuing a debt instrument to making a restaurant booking.
Our use-case¶
We will write a CorDapp to model IOUs on the blockchain. Each IOU – short for “I O(we) (yo)U” – will record the fact that one node owes another node a certain amount. This simple CorDapp will showcase several key benefits of Corda as a blockchain platform:
- Privacy - Since IOUs represent sensitive information, we will be taking advantage of Corda’s ability to only share ledger updates with other nodes on a need-to-know basis, instead of using a gossip protocol to share this information with every node on the network as you would with a traditional blockchain platform
- Well-known identities - Each Corda node has a well-known identity on the network. This allows us to write code in terms of real identities, rather than anonymous public keys
- Re-use of existing, proven technologies - We will be writing our CorDapp using standard Java. It will run on a Corda node, which is simply a Java process running on a regular Java virtual machine (e.g. on your local machine or in the cloud). The nodes will store their data in a standard SQL database
CorDapps usually define at least three things:
- States - the (possibly shared) facts that are written to the ledger
- Flows - the procedures for carrying out specific ledger updates
- Contracts - the constraints governing how states of a given type can evolve over time
Our IOU CorDapp is no exception. It will define the following components:
The IOUState¶
Our state will be the IOUState
, representing an IOU. It will contain the IOU’s value, its lender and its borrower. We can visualize
IOUState
as follows:
The IOUFlow¶
Our flow will be the IOUFlow
. This flow will completely automate the process of issuing a new IOU onto a ledger. It has the following
steps:
The IOUContract¶
For this tutorial, we will use the default TemplateContract
. We will update it to create a fully-fledged IOUContract
in the next
tutorial.
Progress so far¶
We’ve designed a simple CorDapp that will allow nodes to agree new IOUs on the blockchain.
Next, we’ll take a look at the template project we’ll be using as the basis for our CorDapp.
Hello, World! Pt.2 - Contract constraints¶
Note
This tutorial extends the CorDapp built during the Hello, World tutorial.
In the Hello, World tutorial, we built a CorDapp allowing us to model IOUs on ledger. Our CorDapp was made up of two elements:
- An
IOUState
, representing IOUs on the blockchain - An
IOUFlow
and IOUFlowResponder
flow pair, orchestrating the process of agreeing the creation of an IOU on-ledger
However, our CorDapp did not impose any constraints on the evolution of IOUs on the blockchain over time. Anyone was free to create IOUs of any value, between any party.
In this tutorial, we’ll write a contract to imposes rules on how an IOUState
can change over time. In turn, this
will require some small changes to the flow we defined in the previous tutorial.
We’ll start by writing the contract.
Writing the contract¶
It’s easy to imagine that most CorDapps will want to impose some constraints on how their states evolve over time:
- A cash CorDapp will not want to allow users to create transactions that generate money out of thin air (at least without the involvement of a central bank or commercial bank)
- A loan CorDapp might not want to allow the creation of negative-valued loans
- An asset-trading CorDapp will not want to allow users to finalise a trade without the agreement of their counterparty
In Corda, we impose constraints on how states can evolve using contracts.
Note
Contracts in Corda are very different to the smart contracts of other distributed ledger platforms. They are not stateful objects representing the current state of the world. Instead, like a real-world contract, they simply impose rules on what kinds of transactions are allowed.
Every state has an associated contract. A transaction is invalid if it does not satisfy the contract of every input and output state in the transaction.
The Contract interface¶
Just as every Corda state must implement the ContractState
interface, every contract must implement the
Contract
interface:
interface Contract {
// Implements the contract constraints in code.
@Throws(IllegalArgumentException::class)
fun verify(tx: LedgerTransaction)
}
We can see that Contract
expresses its constraints through a verify
function that takes a transaction as input,
and:
- Throws an
IllegalArgumentException
if it rejects the transaction proposal- Returns silently if it accepts the transaction proposal
Controlling IOU evolution¶
What would a good contract for an IOUState
look like? There is no right or wrong answer - it depends on how you
want your CorDapp to behave.
For our CorDapp, let’s impose the constraint that we only want to allow the creation of IOUs. We don’t want nodes to transfer them or redeem them for cash. One way to enforce this behaviour would be by imposing the following constraints:
- A transaction involving IOUs must consume zero inputs, and create one output of type
IOUState
- The transaction should also include a
Create
command, indicating the transaction’s intent (more on commands shortly)
We might also want to impose some constraints on the properties of the issued IOUState
:
- Its value must be positive
- The lender and the borrower cannot be the same entity
And finally, we’ll want to impose constraints on who is required to sign the transaction:
- The IOU’s lender must sign
- The IOU’s borrower must sign
We can picture this transaction as follows:

Defining IOUContract¶
Let’s write a contract that enforces these constraints. We’ll do this by modifying either TemplateContract.java
or
TemplateContract.kt
and updating it to define an IOUContract
:
// Add this import:
import net.corda.core.contracts.*
class IOUContract : Contract {
companion object {
const val ID = "com.template.IOUContract"
}
// Our Create command.
class Create : CommandData
override fun verify(tx: LedgerTransaction) {
val command = tx.commands.requireSingleCommand<Create>()
requireThat {
// Constraints on the shape of the transaction.
"No inputs should be consumed when issuing an IOU." using (tx.inputs.isEmpty())
"There should be one output state of type IOUState." using (tx.outputs.size == 1)
// IOU-specific constraints.
val output = tx.outputsOfType<IOUState>().single()
"The IOU's value must be non-negative." using (output.value > 0)
"The lender and the borrower cannot be the same entity." using (output.lender != output.borrower)
// Constraints on the signers.
val expectedSigners = listOf(output.borrower.owningKey, output.lender.owningKey)
"There must be two signers." using (command.signers.toSet().size == 2)
"The borrower and lender must be signers." using (command.signers.containsAll(expectedSigners))
}
}
}
// Add these imports:
import net.corda.core.contracts.CommandWithParties;
import net.corda.core.identity.Party;
import java.security.PublicKey;
import java.util.Arrays;
import java.util.List;
import static net.corda.core.contracts.ContractsDSL.requireSingleCommand;
// Replace TemplateContract's definition with:
public class IOUContract implements Contract {
public static final String ID = "com.template.IOUContract";
// Our Create command.
public static class Create implements CommandData {
}
@Override
public void verify(LedgerTransaction tx) {
final CommandWithParties<IOUContract.Create> command = requireSingleCommand(tx.getCommands(), IOUContract.Create.class);
// Constraints on the shape of the transaction.
if (!tx.getInputs().isEmpty())
throw new IllegalArgumentException("No inputs should be consumed when issuing an IOU.");
if (!(tx.getOutputs().size() == 1))
throw new IllegalArgumentException("There should be one output state of type IOUState.");
// IOU-specific constraints.
final IOUState output = tx.outputsOfType(IOUState.class).get(0);
final Party lender = output.getLender();
final Party borrower = output.getBorrower();
if (output.getValue() <= 0)
throw new IllegalArgumentException("The IOU's value must be non-negative.");
if (lender.equals(borrower))
throw new IllegalArgumentException("The lender and the borrower cannot be the same entity.");
// Constraints on the signers.
final List<PublicKey> requiredSigners = command.getSigners();
final List<PublicKey> expectedSigners = Arrays.asList(borrower.getOwningKey(), lender.getOwningKey());
if (requiredSigners.size() != 2)
throw new IllegalArgumentException("There must be two signers.");
if (!(requiredSigners.containsAll(expectedSigners)))
throw new IllegalArgumentException("The borrower and lender must be signers.");
}
}
If you’re following along in Java, you’ll also need to rename TemplateContract.java
to IOUContract.java
.
Let’s walk through this code step by step.
The Create command¶
The first thing we add to our contract is a command. Commands serve two functions:
- They indicate the transaction’s intent, allowing us to perform different verification for different types of transaction. For example, a transaction proposing the creation of an IOU could have to meet different constraints to one redeeming an IOU
- They allow us to define the required signers for the transaction. For example, IOU creation might require signatures from the lender only, whereas the transfer of an IOU might require signatures from both the IOU’s borrower and lender
Our contract has one command, a Create
command. All commands must implement the CommandData
interface.
The CommandData
interface is a simple marker interface for commands. In fact, its declaration is only two words
long (Kotlin interfaces do not require a body):
interface CommandData
The verify logic¶
Our contract also needs to define the actual contract constraints by implementing verify
. Our goal in writing the
verify
function is to write a function that, given a transaction:
- Throws an
IllegalArgumentException
if the transaction is considered invalid - Does not throw an exception if the transaction is considered valid
In deciding whether the transaction is valid, the verify
function only has access to the contents of the
transaction:
- tx.inputs, which lists the inputs
- tx.outputs, which lists the outputs
- tx.commands, which lists the commands and their associated signers
As well as to the transaction’s attachments and time-window, which we won’t use here.
Based on the constraints enumerated above, we need to write a verify
function that rejects a transaction if any of
the following are true:
- The transaction doesn’t include a
Create
command - The transaction has inputs
- The transaction doesn’t have exactly one output
- The IOU itself is invalid
- The transaction doesn’t require the lender’s signature
Our first constraint is around the transaction’s commands. We use Corda’s requireSingleCommand
function to test for
the presence of a single Create
command.
If the Create
command isn’t present, or if the transaction has multiple Create
commands, an exception will be
thrown and contract verification will fail.
We also want our transaction to have no inputs and only a single output - an issuance transaction.
In Kotlin, we impose these and the subsequent constraints using Corda’s built-in requireThat
block. requireThat
provides a terse way to write the following:
- If the condition on the right-hand side doesn’t evaluate to true…
- …throw an
IllegalArgumentException
with the message on the left-hand side
As before, the act of throwing this exception causes the transaction to be considered invalid.
In Java, we simply throw an IllegalArgumentException
manually instead.
We want to impose two constraints on the IOUState
itself:
- Its value must be positive
- The lender and the borrower cannot be the same entity
You can see that we’re not restricted to only writing constraints inside verify
. We can also write
other statements - in this case, extracting the transaction’s single IOUState
and assigning it to a variable.
Finally, we require both the lender and the borrower to be required signers on the transaction. A transaction’s
set of required signers is equal to the union of all the signers listed on its commands. We therefore extract the signers from
the Create
command we retrieved earlier.
This is an absolutely essential constraint - it ensures that no IOUState
can ever be created on the blockchain without
the express agreement of both the lender and borrower nodes.
Progress so far¶
We’ve now written an IOUContract
constraining the evolution of each IOUState
over time:
- An
IOUState
can only be created, not transferred or redeemed - Creating an
IOUState
requires an issuance transaction with no inputs, a single IOUState
output, and a Create
command - The
IOUState
created by the issuance transaction must have a positive value, and the lender and borrower must be different entities
Next, we’ll update the IOUFlow
so that it obeys these contract constraints when issuing an IOUState
onto the
ledger.
Updating the flow¶
We now need to update our flow to achieve three things:
- Verifying that the transaction proposal we build fulfills the
IOUContract
constraints - Updating the lender’s side of the flow to request the borrower’s signature
- Creating a response flow for the borrower that responds to the signature request from the lender
We’ll do this by modifying the flow we wrote in the previous tutorial.
Verifying the transaction¶
In IOUFlow.java
/Flows.kt
, change the imports block to the following:
import co.paralleluniverse.fibers.Suspendable
import net.corda.core.contracts.Command
import net.corda.core.flows.CollectSignaturesFlow
import net.corda.core.flows.FinalityFlow
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.InitiatingFlow
import net.corda.core.flows.StartableByRPC
import net.corda.core.identity.Party
import net.corda.core.transactions.TransactionBuilder
import net.corda.core.utilities.ProgressTracker
import co.paralleluniverse.fibers.Suspendable;
import net.corda.core.contracts.Command;
import net.corda.core.flows.*;
import net.corda.core.identity.Party;
import net.corda.core.transactions.SignedTransaction;
import net.corda.core.transactions.TransactionBuilder;
import net.corda.core.utilities.ProgressTracker;
import java.security.PublicKey;
import java.util.Arrays;
import java.util.List;
And update IOUFlow.call
to the following:
// We retrieve the notary identity from the network map.
val notary = serviceHub.networkMapCache.notaryIdentities[0]
// We create the transaction components.
val outputState = IOUState(iouValue, ourIdentity, otherParty)
val command = Command(IOUContract.Create(), listOf(ourIdentity.owningKey, otherParty.owningKey))
// We create a transaction builder and add the components.
val txBuilder = TransactionBuilder(notary = notary)
.addOutputState(outputState, IOUContract.ID)
.addCommand(command)
// Verifying the transaction.
txBuilder.verify(serviceHub)
// Signing the transaction.
val signedTx = serviceHub.signInitialTransaction(txBuilder)
// Creating a session with the other party.
val otherPartySession = initiateFlow(otherParty)
// Obtaining the counterparty's signature.
val fullySignedTx = subFlow(CollectSignaturesFlow(signedTx, listOf(otherPartySession), CollectSignaturesFlow.tracker()))
// Finalising the transaction.
subFlow(FinalityFlow(fullySignedTx, otherPartySession))
// We retrieve the notary identity from the network map.
Party notary = getServiceHub().getNetworkMapCache().getNotaryIdentities().get(0);
// We create the transaction components.
IOUState outputState = new IOUState(iouValue, getOurIdentity(), otherParty);
List<PublicKey> requiredSigners = Arrays.asList(getOurIdentity().getOwningKey(), otherParty.getOwningKey());
Command command = new Command<>(new IOUContract.Create(), requiredSigners);
// We create a transaction builder and add the components.
TransactionBuilder txBuilder = new TransactionBuilder(notary)
.addOutputState(outputState, IOUContract.ID)
.addCommand(command);
// Verifying the transaction.
txBuilder.verify(getServiceHub());
// Signing the transaction.
SignedTransaction signedTx = getServiceHub().signInitialTransaction(txBuilder);
// Creating a session with the other party.
FlowSession otherPartySession = initiateFlow(otherParty);
// Obtaining the counterparty's signature.
SignedTransaction fullySignedTx = subFlow(new CollectSignaturesFlow(
signedTx, Arrays.asList(otherPartySession), CollectSignaturesFlow.tracker()));
// Finalising the transaction.
subFlow(new FinalityFlow(fullySignedTx, otherPartySession));
return null;
In the original CorDapp, we automated the process of notarising a transaction and recording it in every party’s vault
by invoking a built-in flow called FinalityFlow
as a subflow. We’re going to use another pre-defined flow,
CollectSignaturesFlow
, to gather the borrower’s signature.
First, we need to update the command. We are now using IOUContract.Create
, rather than
TemplateContract.Commands.Action
. We also want to make the borrower a required signer, as per the contract
constraints. This is as simple as adding the borrower’s public key to the transaction’s command.
We also need to add the output state to the transaction using a reference to the IOUContract
, instead of to the old
TemplateContract
.
Now that our state is governed by a real contract, we’ll want to check that our transaction proposal satisfies these
requirements before kicking off the signing process. We do this by calling TransactionBuilder.verify
on our
transaction proposal before finalising it by adding our signature.
Requesting the borrower’s signature¶
Previously we wrote a responder flow for the borrower in order to receive the finalised transaction from the lender. We use this same flow to first request their signature over the transaction.
We gather the borrower's signature using `CollectSignaturesFlow`, which takes:
- A transaction signed by the flow initiator
- A list of flow-sessions between the flow initiator and the required signers

And returns a transaction signed by all the required signers.
We can then pass this fully-signed transaction into `FinalityFlow`.
Updating the borrower’s flow¶
On the lender's side, we used `CollectSignaturesFlow` to automate the collection of signatures. To allow the borrower to respond, we need to update its responder flow to first receive the partially signed transaction for signing. Update `IOUFlowResponder.call` to be the following:
@Suspendable
override fun call() {
val signTransactionFlow = object : SignTransactionFlow(otherPartySession) {
override fun checkTransaction(stx: SignedTransaction) = requireThat {
val output = stx.tx.outputs.single().data
"This must be an IOU transaction." using (output is IOUState)
val iou = output as IOUState
"The IOU's value can't be too high." using (iou.value < 100)
}
}
val expectedTxId = subFlow(signTransactionFlow).id
subFlow(ReceiveFinalityFlow(otherPartySession, expectedTxId))
}
@Suspendable
@Override
public Void call() throws FlowException {
class SignTxFlow extends SignTransactionFlow {
private SignTxFlow(FlowSession otherPartySession) {
super(otherPartySession);
}
@Override
protected void checkTransaction(SignedTransaction stx) {
requireThat(require -> {
ContractState output = stx.getTx().getOutputs().get(0).getData();
require.using("This must be an IOU transaction.", output instanceof IOUState);
IOUState iou = (IOUState) output;
require.using("The IOU's value can't be too high.", iou.getValue() < 100);
return null;
});
}
}
SecureHash expectedTxId = subFlow(new SignTxFlow(otherPartySession)).getId();
subFlow(new ReceiveFinalityFlow(otherPartySession, expectedTxId));
return null;
}
We could write our own flow to handle this process. However, there is also a pre-defined flow called `SignTransactionFlow` that can handle the process automatically. The only catch is that `SignTransactionFlow` is an abstract class - we must subclass it and override `SignTransactionFlow.checkTransaction`.
CheckTransactions¶
`SignTransactionFlow` will automatically verify the transaction and its signatures before signing it. However, just because a transaction is contractually valid doesn't mean we necessarily want to sign. What if we don't want to deal with the counterparty in question, or the value is too high, or we're not happy with the transaction's structure?
Overriding `SignTransactionFlow.checkTransaction` allows us to define these additional checks. In our case, we are checking that:
- The transaction involves an `IOUState` - this ensures that `IOUContract` will be run to verify the transaction
- The IOU's value is less than some amount (100 in this case)

If either of these conditions is not met, we will not sign the transaction - even if the transaction and its signatures are contractually valid.
Once we've defined the `SignTransactionFlow` subclass, we invoke it using `FlowLogic.subFlow`, and the communication with the borrower's and the lender's flow is conducted automatically. `SignTransactionFlow` returns the newly signed transaction. We pass in the transaction's ID to `ReceiveFinalityFlow` to ensure we are recording the correct notarised transaction from the lender.
Conclusion¶
We have now updated our flow to verify the transaction and gather the borrower's signature, in line with the constraints defined in `IOUContract`. We can now re-run our updated CorDapp, using the same instructions as before.
Our CorDapp now imposes restrictions on the issuance of IOUs. Most importantly, IOU issuance now requires agreement from both the lender and the borrower before an IOU can be created on the blockchain. This prevents either the lender or the borrower from unilaterally updating the ledger in a way that only benefits themselves.
After completing this tutorial, your CorDapp should look like this:
- Java: https://github.com/corda/corda-tut2-solution-java
- Kotlin: https://github.com/corda/corda-tut2-solution-kotlin
You should now be ready to develop your own CorDapps. You can also find a list of sample CorDapps here. As you write CorDapps, you’ll also want to learn more about the Corda API.
If you get stuck at any point, please reach out on Slack or Stack Overflow.
The remaining tutorials cover individual platform features in isolation. They don’t depend on the code from the Hello, World tutorials, and can be read in any order.
Writing a contract¶
This tutorial will take you through writing a contract, using a simple commercial paper contract as an example. Smart contracts in Corda have three key elements:
- Executable code (validation logic)
- State objects
- Commands
The core of a smart contract is the executable code which validates changes to state objects in transactions. State objects are the data held on the ledger, which represent the current state of an instance of a contract, and are used as inputs and outputs of transactions. Commands are additional data included in transactions to describe what is going on, used to instruct the executable code on how to verify the transaction. For example an `Issue` command may indicate that the validation logic should expect to see an output which does not exist as an input, issued by the same entity that signed the command.
The first thing to think about with a new contract is the lifecycle of contract states: how they are issued, what happens to them after they are issued, and how they are destroyed (if applicable). For the commercial paper contract, states are issued by a legal entity which wishes to create a contract to pay money in the future (the maturity date), in return for a lesser payment now. They are then transferred (moved) to another owner as part of a transaction where the issuer receives funds in payment, and later (after the maturity date) are destroyed (redeemed) by paying the owner the face value of the commercial paper.
This lifecycle for commercial paper is illustrated in the diagram below:

Starting the commercial paper class¶
A smart contract is a class that implements the `Contract` interface. This can be either implemented directly, as done here, or by subclassing an abstract contract such as `OnLedgerAsset`. The heart of any contract in Corda is the `verify` function, which determines whether a given transaction is valid. This example shows how to write a `verify` function from scratch.
The code in this tutorial is available in both Kotlin and Java. You can quickly switch between them to get a feeling for Kotlin’s syntax.
class CommercialPaper : Contract {
override fun verify(tx: LedgerTransaction) {
TODO()
}
}
public class CommercialPaper implements Contract {
@Override
public void verify(LedgerTransaction tx) {
throw new UnsupportedOperationException();
}
}
Every contract must have at least a `verify` method. The verify method returns nothing. This is intentional: the function either completes correctly, or throws an exception, in which case the transaction is rejected.
So far, so simple. Now we need to define the commercial paper state, which represents the fact of ownership of a piece of issued paper.
States¶
A state is a class that stores data that is checked by the contract. A commercial paper state is structured as below:

data class State(
val issuance: PartyAndReference,
override val owner: AbstractParty,
val faceValue: Amount<Issued<Currency>>,
val maturityDate: Instant
) : OwnableState {
override val participants = listOf(owner)
fun withoutOwner() = copy(owner = AnonymousParty(NullKeys.NullPublicKey))
override fun withNewOwner(newOwner: AbstractParty) = CommandAndState(CommercialPaper.Commands.Move(), copy(owner = newOwner))
}
public class State implements OwnableState {
private PartyAndReference issuance;
private AbstractParty owner;
private Amount<Issued<Currency>> faceValue;
private Instant maturityDate;
public State() {
} // For serialization
public State(PartyAndReference issuance, AbstractParty owner, Amount<Issued<Currency>> faceValue,
Instant maturityDate) {
this.issuance = issuance;
this.owner = owner;
this.faceValue = faceValue;
this.maturityDate = maturityDate;
}
public State copy() {
return new State(this.issuance, this.owner, this.faceValue, this.maturityDate);
}
public State withoutOwner() {
return new State(this.issuance, new AnonymousParty(NullKeys.NullPublicKey.INSTANCE), this.faceValue, this.maturityDate);
}
@NotNull
@Override
public CommandAndState withNewOwner(@NotNull AbstractParty newOwner) {
return new CommandAndState(new CommercialPaper.Commands.Move(), new State(this.issuance, newOwner, this.faceValue, this.maturityDate));
}
public PartyAndReference getIssuance() {
return issuance;
}
public AbstractParty getOwner() {
return owner;
}
public Amount<Issued<Currency>> getFaceValue() {
return faceValue;
}
public Instant getMaturityDate() {
return maturityDate;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
State state = (State) o;
if (issuance != null ? !issuance.equals(state.issuance) : state.issuance != null) return false;
if (owner != null ? !owner.equals(state.owner) : state.owner != null) return false;
if (faceValue != null ? !faceValue.equals(state.faceValue) : state.faceValue != null) return false;
return !(maturityDate != null ? !maturityDate.equals(state.maturityDate) : state.maturityDate != null);
}
@Override
public int hashCode() {
int result = issuance != null ? issuance.hashCode() : 0;
result = 31 * result + (owner != null ? owner.hashCode() : 0);
result = 31 * result + (faceValue != null ? faceValue.hashCode() : 0);
result = 31 * result + (maturityDate != null ? maturityDate.hashCode() : 0);
return result;
}
@NotNull
@Override
public List<AbstractParty> getParticipants() {
return ImmutableList.of(this.owner);
}
}
We define a class that implements the `ContractState` interface.
We have four fields in our state:
- `issuance`, a reference to a specific piece of commercial paper issued by some party.
- `owner`, the public key of the current owner. This is the same concept as seen in Bitcoin: the public key has no attached identity and is expected to be one-time-use for privacy reasons. However, unlike in Bitcoin, we model ownership at the level of individual states rather than as a platform-level concept, as we envisage many (possibly most) contracts on the platform will not represent "owner/issuer" relationships, but "party/party" relationships such as a derivative contract.
- `faceValue`, an `Amount<Issued<Currency>>`, which wraps an integer number of pennies and a currency that is specific to some issuer (e.g. a regular bank, a central bank, etc). You can read more about this very common type in API: Core types.
- `maturityDate`, an `Instant`, which is a type from the Java 8 standard time library. It defines a point on the timeline.
States are immutable, and thus the class is defined as immutable as well. The `data` modifier in the Kotlin version causes the compiler to generate the equals/hashCode/toString methods automatically, along with a copy method that can be used to create variants of the original object. Data classes are similar to case classes in Scala, if you are familiar with that language. The `withoutOwner` method uses the auto-generated copy method to return a version of the state with the owner public key blanked out: this will prove useful later.
The Java code compiles to almost identical bytecode as the Kotlin version, but as you can see, is much more verbose.
Commands¶
The validation logic for a contract may vary depending on what stage of a state’s lifecycle it is automating. So it can be useful to pass additional data into the contract code that isn’t represented by the states which exist permanently in the ledger, in order to clarify intent of a transaction.
For this purpose we have commands. Often they don't need to contain any data at all; they just need to exist. A command is a piece of data associated with some signatures. By the time the contract runs, the signatures have already been checked, so from the contract code's perspective a command is simply a data structure with a list of attached public keys. Each key had a signature proving that the corresponding private key was used to sign. Because of this approach, contracts never actually interact or work with digital signatures directly.
Let’s define a few commands now:
interface Commands : CommandData {
class Move : TypeOnlyCommandData(), Commands
class Redeem : TypeOnlyCommandData(), Commands
class Issue : TypeOnlyCommandData(), Commands
}
public static class Commands implements CommandData {
public static class Move extends Commands {
@Override
public boolean equals(Object obj) {
return obj instanceof Move;
}
}
public static class Redeem extends Commands {
@Override
public boolean equals(Object obj) {
return obj instanceof Redeem;
}
}
public static class Issue extends Commands {
@Override
public boolean equals(Object obj) {
return obj instanceof Issue;
}
}
}
We define a simple grouping interface or static class; this gives us a type that all our commands have in common. Then we go ahead and create three commands: `Move`, `Redeem`, `Issue`. `TypeOnlyCommandData` is a helpful utility for the case when there's no data inside the command; only the existence matters. It defines equals and hashCode such that any instances always compare equal and hash to the same value.
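As a quick illustration of that behaviour (a sketch, not part of the tutorial code):
// Two separately constructed instances of a type-only command are
// interchangeable: they compare equal and share a hash code.
val a = CommercialPaper.Commands.Move()
val b = CommercialPaper.Commands.Move()
check(a == b && a.hashCode() == b.hashCode())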
The verify function¶
The heart of a smart contract is the code that verifies a set of state transitions (a transaction). The function is simple: it’s given a class representing the transaction, and if the function returns then the transaction is considered acceptable. If it throws an exception, the transaction is rejected.
Each transaction can have multiple input and output states of different types. The set of contracts to run is decided by taking the code references inside each state. Each contract is run only once. As an example, a transaction that includes 2 cash states and 1 commercial paper state as input, and has as output 1 cash state and 1 commercial paper state, will run two contracts one time each: Cash and CommercialPaper.
override fun verify(tx: LedgerTransaction) {
// Group by everything except owner: any modification to the CP at all is considered changing it fundamentally.
val groups = tx.groupStates(State::withoutOwner)
// There are two possible things that can be done with this CP. The first is trading it. The second is redeeming
// it for cash on or after the maturity date.
val command = tx.commands.requireSingleCommand<CommercialPaper.Commands>()
@Override
public void verify(LedgerTransaction tx) {
List<InOutGroup<State, State>> groups = tx.groupStates(State.class, State::withoutOwner);
CommandWithParties<Commands> cmd = requireSingleCommand(tx.getCommands(), Commands.class);
We start by using the `groupStates` method, which takes a type and a function. State grouping is a way of ensuring your contract can handle multiple unrelated states of the same type in the same transaction, which is needed for splitting/merging of assets, atomic swaps and so on. More on this next.
The second line does what the code suggests: it searches for a command object that inherits from the `CommercialPaper.Commands` supertype, and either returns it, or throws an exception if there's zero or more than one such command.
Using state groups¶
The simplest way to write a smart contract would be to say that each transaction can have a single input state and a single output state of the kind covered by that contract. This would be easy for the developer, but would prevent many important use cases.
The next easiest way to write a contract would be to iterate over each input state and expect it to have an output state. Now you can build a single transaction that, for instance, moves two different cash states in different currencies simultaneously. But it gets complicated when you want to issue or exit one state at the same time as moving another.
Things get harder still once you want to split and merge states. We say states are fungible if they are treated identically to each other by the recipient, despite the fact that they aren’t quite identical. Dollar bills are fungible because even though one may be worn/a bit dirty and another may be crisp and new, they are still both worth exactly $1. Likewise, ten $1 bills are almost exactly equivalent to one $10 bill. On the other hand, $10 and £10 are not fungible: if you tried to pay for something that cost £20 with $10+£10 notes your trade would not be accepted.
To make all this easier the contract API provides a notion of groups. A group is a set of input states and output states that should be checked for validity together.
Consider the following simplified currency trade transaction:
- Input: $12,000 owned by Alice (A)
- Input: $3,000 owned by Alice (A)
- Input: £10,000 owned by Bob (B)
- Output: £10,000 owned by Alice (B)
- Output: $15,000 owned by Bob (A)
In this transaction Alice and Bob are trading $15,000 for £10,000. Alice has her money in the form of two different inputs e.g. because she received the dollars in two payments. The input and output amounts do balance correctly, but the cash smart contract must consider the pounds and the dollars separately because they are not fungible: they cannot be merged together. So we have two groups: A and B.
The `LedgerTransaction.groupStates` method handles this logic for us: firstly, it selects only states of the given type (as the transaction may include other types of state, such as states representing bond ownership, or a multi-sig state) and then it takes a function that maps a state to a grouping key. All states that share the same key are grouped together. In the case of the cash example above, the grouping key would be the currency.
In this kind of contract we don’t want CP to be fungible: merging and splitting is (in our example) not allowed. So we just use a copy of the state minus the owner field as the grouping key.
Here are some code examples:
// Type of groups is List<InOutGroup<State, Pair<PartyReference, Currency>>>
val groups = tx.groupStates { it: Cash.State -> it.amount.token }
for ((inputs, outputs, key) in groups) {
// Either inputs or outputs could be empty.
val (deposit, currency) = key
...
}
List<InOutGroup<State, Pair<PartyReference, Currency>>> groups = tx.groupStates(Cash.State.class, s -> new Pair<>(s.deposit, s.amount.currency));
for (InOutGroup<State, Pair<PartyReference, Currency>> group : groups) {
List<State> inputs = group.getInputs();
List<State> outputs = group.getOutputs();
Pair<PartyReference, Currency> key = group.getKey();
...
}
The `groupStates` call uses the provided function to calculate a "grouping key". All states that have the same grouping key are placed in the same group. A grouping key can be anything that implements equals/hashCode, but it's always an aggregate of the fields that shouldn't change between input and output. In the above example we picked the fields we wanted and packed them into a `Pair`. It returns a list of `InOutGroup`, which is just a holder for the inputs, outputs and the key that was used to define the group. In the Kotlin version we unpack these using destructuring to get convenient access to the inputs, the outputs, the deposit data and the currency. The Java version is more verbose, but equivalent.
The rules can then be applied to the inputs and outputs as if it were a single transaction. A group may have zero inputs or zero outputs: this can occur when issuing assets onto the ledger, or removing them.
In this example, we do it differently and use the state class itself as the aggregator. We just blank out fields that are allowed to change, making the grouping key be “everything that isn’t that”:
val groups = tx.groupStates(State::withoutOwner)
List<InOutGroup<State, State>> groups = tx.groupStates(State.class, State::withoutOwner);
For large states with many fields that must remain constant and only one or two that are really mutable, it's often easier to do things this way than to specifically name each field that must stay the same. The `withoutOwner` function here simply returns a copy of the object but with the `owner` field set to `NullPublicKey`, which is just a public key of all zeros. It's invalid and useless, but that's OK, because all we're doing is preventing the field from mattering in equals and hashCode.
Checking the requirements¶
After extracting the command and the groups, we then iterate over each group and verify it meets the required business logic.
val timeWindow: TimeWindow? = tx.timeWindow
for ((inputs, outputs, _) in groups) {
when (command.value) {
is Commands.Move -> {
val input = inputs.single()
requireThat {
"the transaction is signed by the owner of the CP" using (input.owner.owningKey in command.signers)
"the state is propagated" using (outputs.size == 1)
// Don't need to check anything else, as if outputs.size == 1 then the output is equal to
// the input ignoring the owner field due to the grouping.
}
}
is Commands.Redeem -> {
// Redemption of the paper requires movement of on-ledger cash.
val input = inputs.single()
val received = tx.outputs.map { it.data }.sumCashBy(input.owner)
val time = timeWindow?.fromTime ?: throw IllegalArgumentException("Redemptions must be timestamped")
requireThat {
"the paper must have matured" using (time >= input.maturityDate)
"the received amount equals the face value" using (received == input.faceValue)
"the paper must be destroyed" using outputs.isEmpty()
"the transaction is signed by the owner of the CP" using (input.owner.owningKey in command.signers)
}
}
is Commands.Issue -> {
val output = outputs.single()
val time = timeWindow?.untilTime ?: throw IllegalArgumentException("Issuances must be timestamped")
requireThat {
// Don't allow people to issue commercial paper under other entities identities.
"output states are issued by a command signer" using (output.issuance.party.owningKey in command.signers)
"output values sum to more than the inputs" using (output.faceValue.quantity > 0)
"the maturity date is not in the past" using (time < output.maturityDate)
// Don't allow an existing CP state to be replaced by this issuance.
"can't reissue an existing state" using inputs.isEmpty()
}
}
else -> throw IllegalArgumentException("Unrecognised command")
}
}
TimeWindow timeWindow = tx.getTimeWindow();
for (InOutGroup group : groups) {
List<State> inputs = group.getInputs();
List<State> outputs = group.getOutputs();
if (cmd.getValue() instanceof Commands.Move) {
State input = inputs.get(0);
requireThat(require -> {
require.using("the transaction is signed by the owner of the CP", cmd.getSigners().contains(input.getOwner().getOwningKey()));
require.using("the state is propagated", outputs.size() == 1);
// Don't need to check anything else, as if outputs.size == 1 then the output is equal to
// the input ignoring the owner field due to the grouping.
return null;
});
} else if (cmd.getValue() instanceof Commands.Redeem) {
// Redemption of the paper requires movement of on-ledger cash.
State input = inputs.get(0);
Amount<Issued<Currency>> received = sumCashBy(tx.getOutputStates(), input.getOwner());
if (timeWindow == null) throw new IllegalArgumentException("Redemptions must be timestamped");
Instant time = timeWindow.getFromTime();
requireThat(require -> {
require.using("the paper must have matured", time.isAfter(input.getMaturityDate()));
require.using("the received amount equals the face value", received == input.getFaceValue());
require.using("the paper must be destroyed", outputs.isEmpty());
require.using("the transaction is signed by the owner of the CP", cmd.getSigners().contains(input.getOwner().getOwningKey()));
return null;
});
} else if (cmd.getValue() instanceof Commands.Issue) {
State output = outputs.get(0);
if (timeWindow == null) throw new IllegalArgumentException("Issuances must have a time-window");
Instant time = timeWindow.getUntilTime();
requireThat(require -> {
// Don't allow people to issue commercial paper under other entities identities.
require.using("output states are issued by a command signer", cmd.getSigners().contains(output.getIssuance().getParty().getOwningKey()));
require.using("output values sum to more than the inputs", output.getFaceValue().getQuantity() > 0);
require.using("the maturity date is not in the past", time.isBefore(output.getMaturityDate()));
// Don't allow an existing CP state to be replaced by this issuance.
require.using("can't reissue an existing state", inputs.isEmpty());
return null;
});
} else {
throw new IllegalArgumentException("Unrecognised command");
}
}
This loop is the core logic of the contract.
The first line simply gets the time-window out of the transaction. Setting a time-window in transactions is optional, so a time may be missing here. We check for it being null later.
Warning
In the Kotlin version, as long as we write a comparison with the transaction time first, the compiler will verify we didn't forget to check if it's missing. Unfortunately, due to the need for smooth interoperability with Java, this check won't happen if we write e.g. `someDate > time`; it has to be `time < someDate`. So it's good practice to always write the transaction time-window first.
Next, we take one of three paths, depending on what the type of the command object is.
If the command is a `Move` command:
The first line (first three lines in Java) imposes a requirement that there be a single piece of commercial paper in this group. We do not allow multiple units of CP to be split or merged even if they are owned by the same owner. The `single()` method is an extension method defined by the Kotlin standard library: given a list, it throws an exception if the list size is not 1, otherwise it returns the single item in that list. In Java, this appears as a regular static method of the type familiar from many FooUtils-style singleton classes, and we have statically imported it here. In Kotlin, it appears as a method that can be called on any JDK list. The syntax is slightly different but behind the scenes, the code compiles to the same bytecode.
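For reference, the stdlib semantics of `single()` (a quick sketch, easily verified in a Kotlin REPL):
listOf("only").single()          // returns "only" - exactly one element
// listOf("a", "b").single()     // would throw IllegalArgumentException: more than one element
// emptyList<String>().single()  // would throw NoSuchElementException: the list is empty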
Next, we check that the transaction was signed by the public key that's marked as the current owner of the commercial paper. Because the platform has already verified all the digital signatures before the contract begins execution, all we have to do is verify that the owner's public key was one of the keys that signed the transaction. In both languages we do this with a `requireThat` construct, which in the Kotlin version looks like it's built into the language. In fact `requireThat` is an ordinary function provided by the platform's contract API. Kotlin supports the creation of domain specific languages through the intersection of several features of the language, and we use it here to support the natural listing of requirements. To see what it compiles down to, look at the Java version. Each `"string" using (expression)` statement inside a `requireThat` turns into an assertion that the given expression is true, with an `IllegalArgumentException` being thrown that contains the string if not. It's just another way to write out a regular assertion, but with the English-language requirement being put front and center.
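Roughly, each such line desugars to an ordinary check like the following (a behavioural sketch, not the platform's actual implementation):
// Approximately what `"the state is propagated" using (outputs.size == 1)` performs:
if (outputs.size != 1)
    throw IllegalArgumentException("Failed requirement: the state is propagated")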
Next, we simply verify that the output state is actually present: a move is not allowed to delete the CP from the ledger. The grouping logic already ensured that the details are identical and haven’t been changed, save for the public key of the owner.
If the command is a `Redeem` command, then the requirements are more complex:
- We still check there is a CP input state.
- We want to see that the face value of the CP is being moved as a cash claim against some party, that is, the issuer of the CP is really paying back the face value.
- The transaction must be happening after the maturity date.
- The commercial paper must not be propagated by this transaction: it must be deleted, by the group having no output state. This prevents the same CP being considered redeemable multiple times.
To calculate how much cash is moving, we use the `sumCashBy` utility function. Again, this is an extension function, so in Kotlin code it appears as if it was a method on the `List<Cash.State>` type even though the JDK provides no such method. In Java we see its true nature: it is actually a static method named `CashKt.sumCashBy`. This method simply returns an `Amount` object containing the sum of all the cash states in the transaction outputs that are owned by the given public key, or throws an exception if there were no such states or if there were different currencies represented in the outputs! So we can see that this contract imposes a limitation on the structure of a redemption transaction: you are not allowed to move currencies that the redemption does not involve in the same transaction. This limitation could be addressed with better APIs, if it were to be a real limitation.
Finally, we support an `Issue` command, to create new instances of commercial paper on the ledger. It likewise enforces various invariants upon the issuance, such as there being exactly one output CP state.
This contract is simple and does not implement all the business logic a real commercial paper lifecycle management program would. For instance, there is no logic requiring a signature from the issuer for redemption: it is assumed that any transfer of money that takes place at the same time as redemption is good enough. Perhaps that is something that should be tightened. Likewise, there is no logic handling what happens if the issuer has gone bankrupt, if there is a dispute, and so on.
As the prototype evolves, these requirements will be explored and this tutorial updated to reflect improvements in the contracts API.
How to test your contract¶
Of course, it is essential to unit test your new nugget of business logic to ensure that it behaves as you expect. As contract code is just a regular Java function you could write out the logic entirely by hand in the usual manner. But this would be inconvenient, and then you’d get bored of writing tests and that would be bad: you might be tempted to skip a few.
To make contract testing more convenient Corda provides a language-like API for both Kotlin and Java that lets you easily construct chains of transactions and verify that they either pass validation, or fail with a particular error message.
Testing contracts with this domain specific language is covered in the separate tutorial, Writing a contract test.
Adding a generation API to your contract¶
Contract classes must provide a verify function, but they may optionally also provide helper functions to simplify their usage. A simple class of functions most contracts provide are generation functions, which either create or modify a transaction to perform certain actions (an action is normally mappable 1:1 to a command, but doesn’t have to be so).
Generation may involve complex logic. For example, the cash contract has a `generateSpend` method that is given a set of cash states and chooses a way to combine them together to satisfy the amount of money that is being sent. In the immutable-state model that we are using, ledger entries (states) can only be created and deleted, but never modified. Therefore to send $1200 when we have only a $900 state and a $500 state requires combining both states together, and then creating two new output states: the $1200 being sent, and $200 back to ourselves. This latter state is called the change and is a concept that should be familiar to anyone who has worked with Bitcoin.
As another example, we can imagine code that implements a netting algorithm may generate complex transactions that must be signed by many people. Whilst such code might be too big for a single utility method (it’d probably be sized more like a module), the basic concept is the same: preparation of a transaction using complex logic.
For our commercial paper contract however, the things that can be done with it are quite simple. Let’s start with a method to wrap up the issuance process:
fun generateIssue(issuance: PartyAndReference, faceValue: Amount<Issued<Currency>>, maturityDate: Instant,
notary: Party): TransactionBuilder {
val state = State(issuance, issuance.party, faceValue, maturityDate)
val stateAndContract = StateAndContract(state, CP_PROGRAM_ID)
return TransactionBuilder(notary = notary).withItems(stateAndContract, Command(Commands.Issue(), issuance.party.owningKey))
}
We take a reference that points to the issuing party (i.e. the caller) and which can contain any internal bookkeeping/reference numbers that we may require. The reference field is an ideal place to put (for example) a join key. Then the face value of the paper, and the maturity date. It returns a `TransactionBuilder`.
A `TransactionBuilder` is one of the few mutable classes the platform provides. It allows you to add inputs, outputs and commands to it and is designed to be passed around, potentially between multiple contracts.
Note
Generation methods should ideally be written to compose with each other, that is, they should take a `TransactionBuilder` as an argument instead of returning one, unless you are sure it doesn't make sense to combine this type of transaction with others. In this case, issuing CP at the same time as doing other things would just introduce complexity that isn't likely to be worth it, so we return a fresh object each time: instead, an issuer should issue the CP (starting out owned by themselves), and then sell it in a separate transaction.
The function we define creates a `CommercialPaper.State` object that mostly just uses the arguments we were given, but it fills out the owner field of the state to be the same public key as the issuing party.
We then combine the `CommercialPaper.State` object with a reference to the `CommercialPaper` contract, which is defined inside the contract itself:
companion object {
const val CP_PROGRAM_ID: ContractClassName = "net.corda.finance.contracts.CommercialPaper"
}
public static final String JCP_PROGRAM_ID = "net.corda.finance.contracts.JavaCommercialPaper";
This value, which is the fully qualified class name of the contract, tells the Corda platform where to find the contract code that should be used to validate a transaction containing an output state of this contract type. Typically the contract code will be included in the transaction as an attachment (see Using attachments).
The returned partial transaction has a `Command` object as a parameter. This is a container for any object that implements the `CommandData` interface, along with a list of keys that are expected to sign this transaction. In this case, issuance requires that the issuing party sign, so we put the key of the party there.
The `TransactionBuilder` has a convenience `withItems` method that takes a variable argument list. You can pass in any `StateAndRef` (input), `StateAndContract` (output) or `Command` objects and it'll build up the transaction for you.
There’s one final thing to be aware of: we ask the caller to select a notary that controls this state and prevents it from being double spent. You can learn more about this topic in the Notaries article.
Note
For now, don’t worry about how to pick a notary. More infrastructure will come later to automate this decision for you.
What about moving the paper, i.e. reassigning ownership to someone else?
fun generateMove(tx: TransactionBuilder, paper: StateAndRef<State>, newOwner: AbstractParty) {
tx.addInputState(paper)
val outputState = paper.state.data.withNewOwner(newOwner).ownableState
tx.addOutputState(outputState, CP_PROGRAM_ID)
tx.addCommand(Command(Commands.Move(), paper.state.data.owner.owningKey))
}
Here, the method takes a pre-existing `TransactionBuilder` and adds to it. This is correct because typically you will want to combine a sale of CP atomically with the movement of some other asset, such as cash. So both generate methods should operate on the same transaction. You can see an example of this being done in the unit tests for the commercial paper contract.
The paper is given to us as a `StateAndRef<CommercialPaper.State>` object. This is exactly what it sounds like: a small object that has a (copy of a) state object, and also the `(txhash, index)` that indicates the location of this state on the ledger.
We add the existing paper state as an input, the same paper state with the owner field adjusted as an output, and finally a move command that has the old owner’s public key: this is what forces the current owner’s signature to be present on the transaction, and is what’s checked by the contract.
Finally, we can do redemption.
@Throws(InsufficientBalanceException::class)
fun generateRedeem(tx: TransactionBuilder, paper: StateAndRef<State>, services: ServiceHub) {
// Add the cash movement using the states in our vault.
CashUtils.generateSpend(
services = services,
tx = tx,
amount = paper.state.data.faceValue.withoutIssuer(),
ourIdentity = services.myInfo.singleIdentityAndCert(),
to = paper.state.data.owner
)
tx.addInputState(paper)
tx.addCommand(Command(Commands.Redeem(), paper.state.data.owner.owningKey))
}
Here we can see an example of composing contracts together. When an owner wishes to redeem the commercial paper, the issuer (i.e. the caller) must gather cash from its vault and send the face value to the owner of the paper.
Note
This contract has no explicit concept of rollover.
The vault is a concept that may be familiar from Bitcoin and Ethereum. It is simply a set of states (such as cash) that are owned by the caller. Here, we use the vault to update the partial transaction we are handed with a movement of cash from the issuer of the commercial paper to the current owner. If we don’t have enough quantity of cash in our vault, an exception is thrown. Then we add the paper itself as an input, but, not an output (as we wish to remove it from the ledger). Finally, we add a Redeem command that should be signed by the owner of the commercial paper.
Warning
The amount we pass to the `Cash.generateSpend` function has to be treated first with `withoutIssuer`. This reflects the fact that the way we handle issuer constraints is still evolving; the commercial paper contract requires payment in the form of a currency issued by a specific party (e.g. the central bank, or the issuer's own bank perhaps). But the vault wants to assemble spend transactions using cash states from any issuer, thus we must strip it here. This represents a design mismatch that we will resolve in future versions with a more complete way to express issuer constraints.
A `TransactionBuilder` is not by itself ready to be used anywhere, so first we must convert it to something that is recognised by the network. The most important next step is for the participating entities to sign it. Typically, an initiating flow will create an initial partially signed `SignedTransaction` by calling the `serviceHub.signInitialTransaction` method. Then the frozen `SignedTransaction` can be passed to other nodes by the flow, which can sign it using `serviceHub.createSignature` and distribute it. The `CollectSignaturesFlow` provides a generic implementation of this process that can be used as a `subFlow`.
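Putting those steps together, the initiator's side looks roughly like this (a sketch inside an initiating flow, assuming a `txBuilder` and a counterparty `otherPartySession` already exist):
// Sign with our own key first, producing a partially signed SignedTransaction.
val partSignedTx = serviceHub.signInitialTransaction(txBuilder)
// Gather the remaining required signatures over the counterparty's session.
val fullySignedTx = subFlow(CollectSignaturesFlow(partSignedTx, listOf(otherPartySession)))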
You can see how transactions flow through the different stages of construction by examining the commercial paper unit tests.
How multi-party transactions are constructed and transmitted¶
OK, so now we know how to define the rules of the ledger, and we know how to construct transactions that satisfy those rules … and if all we were doing was maintaining our own data that might be enough. But we aren’t: Corda is about keeping many different parties all in sync with each other.
In a classical blockchain system all data is transmitted to everyone and if you want to do something fancy, like a multi-party transaction, you’re on your own. In Corda data is transmitted only to parties that need it and multi-party transactions are a way of life, so we provide lots of support for managing them.
You can learn how transactions are moved between peers and taken through the build-sign-notarise-broadcast process in a separate tutorial, Writing flows.
Non-asset-oriented smart contracts¶
Although this tutorial covers how to implement an owned asset, there is no requirement that states and code contracts must be concerned with ownership of an asset. It is better to think of states as representing useful facts about the world, and (code) contracts as imposing logical relations on how facts combine to produce new facts. Alternatively you can imagine that states are like rows in a relational database and contracts are like stored procedures and relational constraints.
When writing a contract that handles deal-like entities rather than asset-like entities, you may wish to refer to “Interest rate swaps” and the accompanying source code. Whilst all the concepts are the same, deals are typically not splittable or mergeable and thus you don’t have to worry much about grouping of states.
Making things happen at a particular time¶
It would be nice if you could program your node to automatically redeem your commercial paper as soon as it matures. Corda provides a way for states to advertise scheduled events that should occur in the future. Whilst this information is by default ignored, if the corresponding CorDapp is installed and active in your node, and if the state is considered relevant by your vault (e.g. because you own it), then the node can automatically begin the process of creating a transaction and taking it through the life cycle. You can learn more about this in the article "Event scheduling".
Encumbrances¶
All contract states may be encumbered by up to one other state, which we call an encumbrance.
The encumbrance state, if present, forces additional controls over the encumbered state, since the encumbrance state contract will also be verified during the execution of the transaction. For example, a contract state could be encumbered with a time-lock contract state; the state is then only processable in a transaction that verifies that the time specified in the encumbrance time-lock has passed.
The encumbered state refers to its encumbrance by index, and the referred encumbrance state is an output state in a particular position on the same transaction that created the encumbered state. Note that an encumbered state that is being consumed must have its encumbrance consumed in the same transaction, otherwise the transaction is not valid.
The encumbrance reference is optional in the `ContractState` interface:
val encumbrance: Int? get() = null
@Nullable
@Override
public Integer getEncumbrance() {
return null;
}
The time-lock contract mentioned above can be implemented very simply:
class TestTimeLock : Contract {
...
override fun verify(tx: LedgerTransaction) {
val time = tx.timeWindow?.untilTime ?: throw IllegalStateException(...)
...
requireThat {
"the time specified in the time-lock has passed" by
(time >= tx.inputs.filterIsInstance<TestTimeLock.State>().single().validFrom)
}
}
...
}
We can then set up an encumbered state:
val encumberedState = Cash.State(amount = 1000.DOLLARS `issued by` defaultIssuer, owner = DUMMY_PUBKEY_1, encumbrance = 1)
val fourPmTimelock = TestTimeLock.State(Instant.parse("2015-04-17T16:00:00.00Z"))
When we construct a transaction that generates the encumbered state, we must place the encumbrance in the corresponding output position of that transaction. And when we subsequently consume that encumbered state, the same encumbrance state must be available somewhere within the input set of states.
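A minimal sketch of that construction with a `TransactionBuilder` (assuming the two states defined above and a `notary`; `TestTimeLock.PROGRAM_ID` is a hypothetical contract class name for the time-lock contract):
val txBuilder = TransactionBuilder(notary = notary)
        // Output 0: the cash state, declaring that output 1 encumbers it.
        .addOutputState(encumberedState, Cash.PROGRAM_ID, notary, encumbrance = 1)
        // Output 1: the time-lock state, which must later be consumed in the
        // same transaction that consumes the encumbered cash.
        .addOutputState(fourPmTimelock, TestTimeLock.PROGRAM_ID)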
In future, we will consider the concept of a covenant. This is where the encumbrance travels alongside each iteration of the encumbered state. For example, a cash state may be encumbered with a domicile encumbrance, which checks the domicile of the identity of the owner that the cash state is being moved to, in order to uphold sanction screening regulations, and prevent cash being paid to parties domiciled in e.g. North Korea. In this case, the encumbrance should be permanently attached to all future cash states stemming from this one.
We will also consider marking states that are capable of being encumbrances as such. This will prevent states being used as encumbrances inadvertently. For example, the time-lock above would be usable as an encumbrance, but it makes no sense to be able to encumber a cash state with another one.
Writing a contract test¶
This tutorial will take you through the steps required to write a contract test using Kotlin and Java.
The testing DSL allows one to define a piece of the ledger with transactions referring to each other, and ways of verifying their correctness.
Testing single transactions¶
We start with the empty ledger:
class CommercialPaperTest {
@Test
fun emptyLedger() {
ledger {
}
}
...
}
import org.junit.Test;
import static net.corda.testing.NodeTestUtils.ledger;
public class CommercialPaperTest {
@Test
public void emptyLedger() {
ledger(l -> {
return null;
});
}
}
The DSL keyword `ledger` takes a closure that can build up several transactions and may verify their overall correctness. A ledger is effectively a fresh world with no pre-existing transactions or services within it.
We will start by defining a helper function that returns a `CommercialPaper` state:
val bigCorp = TestIdentity(CordaX500Name("BigCorp", "New York", "GB"))
private static final TestIdentity bigCorp = new TestIdentity(new CordaX500Name("BigCorp", "New York", "GB"));
It's a `CommercialPaper` issued by `MEGA_CORP` with a face value of $1000 and a maturity date in 7 days.
Let's add a `CommercialPaper` transaction:
@Test
fun simpleCPDoesntCompile() {
val inState = getPaper()
ledger {
transaction {
input(CommercialPaper.CP_PROGRAM_ID) { inState }
}
}
}
@Test
public void simpleCPDoesntCompile() {
ICommercialPaperState inState = getPaper();
ledger(l -> {
l.transaction(tx -> {
tx.input(inState);
});
return Unit.INSTANCE;
});
}
We can add a transaction to the ledger using the `transaction` primitive. The transaction in turn may be defined by specifying `input`s, `output`s, `command`s and `attachment`s.
The above `input` call is a bit special; transactions don't actually contain input states, just references to output states of other transactions. Under the hood the above `input` call creates a dummy transaction in the ledger (that won't be verified) which outputs the specified state, and references that from this transaction.
The above code however doesn’t compile:
Error:(29, 17) Kotlin: Type mismatch: inferred type is Unit but EnforceVerifyOrFail was expected
Error:(35, 27) java: incompatible types: bad return type in lambda expression missing return value
This is deliberate: the DSL forces us to specify either `verifies()` or `` `fails with`("some text") `` on the last line of `transaction`:
// This example test will fail with this exception.
@Test(expected = IllegalStateException::class)
fun simpleCP() {
val inState = getPaper()
ledgerServices.ledger(dummyNotary.party) {
transaction {
attachments(CP_PROGRAM_ID)
input(CP_PROGRAM_ID, inState)
verifies()
}
}
}
// This example test will fail with this exception.
@Test(expected = IllegalStateException.class)
public void simpleCP() {
ICommercialPaperState inState = getPaper();
ledger(ledgerServices, l -> {
l.transaction(tx -> {
tx.attachments(JCP_PROGRAM_ID);
tx.input(JCP_PROGRAM_ID, inState);
return tx.verifies();
});
return Unit.INSTANCE;
});
}
Let’s take a look at a transaction that fails.
// This example test will fail with this exception.
@Test(expected = TransactionVerificationException.ContractRejection::class)
fun simpleCPMove() {
val inState = getPaper()
ledgerServices.ledger(dummyNotary.party) {
transaction {
input(CP_PROGRAM_ID, inState)
command(megaCorp.publicKey, CommercialPaper.Commands.Move())
attachments(CP_PROGRAM_ID)
verifies()
}
}
}
// This example test will fail with this exception.
@Test(expected = TransactionVerificationException.ContractRejection.class)
public void simpleCPMove() {
ICommercialPaperState inState = getPaper();
ledger(ledgerServices, l -> {
l.transaction(tx -> {
tx.input(JCP_PROGRAM_ID, inState);
tx.command(megaCorp.getPublicKey(), new JavaCommercialPaper.Commands.Move());
tx.attachments(JCP_PROGRAM_ID);
return tx.verifies();
});
return Unit.INSTANCE;
});
}
When run, that code produces the following error:
net.corda.core.contracts.TransactionVerificationException$ContractRejection: java.lang.IllegalArgumentException: Failed requirement: the state is propagated
net.corda.core.contracts.TransactionVerificationException$ContractRejection: java.lang.IllegalStateException: the state is propagated
The transaction verification failed because we wanted to move the paper but didn't specify an output; the contract requires the state to be propagated.
However, we can specify that this is the intended behaviour by changing `verifies()` to `` `fails with`("the state is propagated") ``:
@Test
fun simpleCPMoveFails() {
val inState = getPaper()
ledgerServices.ledger(dummyNotary.party) {
transaction {
input(CP_PROGRAM_ID, inState)
command(megaCorp.publicKey, CommercialPaper.Commands.Move())
attachments(CP_PROGRAM_ID)
`fails with`("the state is propagated")
}
}
}
@Test
public void simpleCPMoveFails() {
ICommercialPaperState inState = getPaper();
ledger(ledgerServices, l -> {
l.transaction(tx -> {
tx.input(JCP_PROGRAM_ID, inState);
tx.command(megaCorp.getPublicKey(), new JavaCommercialPaper.Commands.Move());
tx.attachments(JCP_PROGRAM_ID);
return tx.failsWith("the state is propagated");
});
return Unit.INSTANCE;
});
}
We can continue to build the transaction until it `verifies()`:
@Test
fun simpleCPMoveFailureAndSuccess() {
val inState = getPaper()
ledgerServices.ledger(dummyNotary.party) {
transaction {
input(CP_PROGRAM_ID, inState)
command(megaCorp.publicKey, CommercialPaper.Commands.Move())
attachments(CP_PROGRAM_ID)
`fails with`("the state is propagated")
output(CP_PROGRAM_ID, "alice's paper", inState.withOwner(alice.party))
verifies()
}
}
}
@Test
public void simpleCPMoveSuccessAndFailure() {
ICommercialPaperState inState = getPaper();
ledger(ledgerServices, l -> {
l.transaction(tx -> {
tx.input(JCP_PROGRAM_ID, inState);
tx.command(megaCorp.getPublicKey(), new JavaCommercialPaper.Commands.Move());
tx.attachments(JCP_PROGRAM_ID);
tx.failsWith("the state is propagated");
tx.output(JCP_PROGRAM_ID, "alice's paper", inState.withOwner(alice.getParty()));
return tx.verifies();
});
return Unit.INSTANCE;
});
}
`output` specifies that we want the input state to be transferred to `ALICE` and `command` adds the `Move` command itself, signed by the current owner of the input state, `MEGA_CORP_PUBKEY`.
We constructed a complete signed commercial paper transaction and verified it. Note how we left in the `fails with` line - this is fine; the failure will be tested on the partially constructed transaction.
What should we do if we wanted to test what happens when the wrong party signs the transaction? If we simply add a `command` it will permanently ruin the transaction… Enter `tweak`:
@Test
fun `simple issuance with tweak`() {
ledgerServices.ledger(dummyNotary.party) {
transaction {
output(CP_PROGRAM_ID, "paper", getPaper()) // Some CP is issued onto the ledger by MegaCorp.
attachments(CP_PROGRAM_ID)
tweak {
// The wrong pubkey.
command(bigCorp.publicKey, CommercialPaper.Commands.Issue())
timeWindow(TEST_TX_TIME)
`fails with`("output states are issued by a command signer")
}
command(megaCorp.publicKey, CommercialPaper.Commands.Issue())
timeWindow(TEST_TX_TIME)
verifies()
}
}
}
@Test
public void simpleIssuanceWithTweak() {
ledger(ledgerServices, l -> {
l.transaction(tx -> {
tx.output(JCP_PROGRAM_ID, "paper", getPaper()); // Some CP is issued onto the ledger by MegaCorp.
tx.attachments(JCP_PROGRAM_ID);
tx.tweak(tw -> {
tw.command(bigCorp.getPublicKey(), new JavaCommercialPaper.Commands.Issue());
tw.timeWindow(TEST_TX_TIME);
return tw.failsWith("output states are issued by a command signer");
});
tx.command(megaCorp.getPublicKey(), new JavaCommercialPaper.Commands.Issue());
tx.timeWindow(TEST_TX_TIME);
return tx.verifies();
});
return Unit.INSTANCE;
});
}
`tweak` creates a local copy of the transaction. This makes it possible to locally "ruin" the transaction without modifying the original one, allowing testing of different error conditions.
We now have a neat little test that tests a single transaction. This is already useful, and in fact testing of a single transaction in this way is very common. There is even a shorthand top-level `transaction` primitive that creates a ledger with a single transaction:
@Test
fun `simple issuance with tweak and top level transaction`() {
ledgerServices.transaction(dummyNotary.party) {
output(CP_PROGRAM_ID, "paper", getPaper()) // Some CP is issued onto the ledger by MegaCorp.
attachments(CP_PROGRAM_ID)
tweak {
// The wrong pubkey.
command(bigCorp.publicKey, CommercialPaper.Commands.Issue())
timeWindow(TEST_TX_TIME)
`fails with`("output states are issued by a command signer")
}
command(megaCorp.publicKey, CommercialPaper.Commands.Issue())
timeWindow(TEST_TX_TIME)
verifies()
}
}
@Test
public void simpleIssuanceWithTweakTopLevelTx() {
transaction(ledgerServices, tx -> {
tx.output(JCP_PROGRAM_ID, "paper", getPaper()); // Some CP is issued onto the ledger by MegaCorp.
tx.attachments(JCP_PROGRAM_ID);
tx.tweak(tw -> {
tw.command(bigCorp.getPublicKey(), new JavaCommercialPaper.Commands.Issue());
tw.timeWindow(TEST_TX_TIME);
return tw.failsWith("output states are issued by a command signer");
});
tx.command(megaCorp.getPublicKey(), new JavaCommercialPaper.Commands.Issue());
tx.timeWindow(TEST_TX_TIME);
return tx.verifies();
});
}
Chaining transactions¶
Now that we know how to define a single transaction, let’s look at how to define a chain of them:
@Test
fun `chain commercial paper`() {
val issuer = megaCorp.party.ref(123)
ledgerServices.ledger(dummyNotary.party) {
unverifiedTransaction {
attachments(Cash.PROGRAM_ID)
output(Cash.PROGRAM_ID, "alice's $900", 900.DOLLARS.CASH issuedBy issuer ownedBy alice.party)
}
// Some CP is issued onto the ledger by MegaCorp.
transaction("Issuance") {
output(CP_PROGRAM_ID, "paper", getPaper())
command(megaCorp.publicKey, CommercialPaper.Commands.Issue())
attachments(CP_PROGRAM_ID)
timeWindow(TEST_TX_TIME)
verifies()
}
transaction("Trade") {
input("paper")
input("alice's $900")
output(Cash.PROGRAM_ID, "borrowed $900", 900.DOLLARS.CASH issuedBy issuer ownedBy megaCorp.party)
output(CP_PROGRAM_ID, "alice's paper", "paper".output<ICommercialPaperState>().withOwner(alice.party))
command(alice.publicKey, Cash.Commands.Move())
command(megaCorp.publicKey, CommercialPaper.Commands.Move())
verifies()
}
}
}
@Test
public void chainCommercialPaper() {
PartyAndReference issuer = megaCorp.ref(defaultRef);
ledger(ledgerServices, l -> {
l.unverifiedTransaction(tx -> {
tx.output(Cash.PROGRAM_ID, "alice's $900",
new Cash.State(issuedBy(DOLLARS(900), issuer), alice.getParty()));
tx.attachments(Cash.PROGRAM_ID);
return Unit.INSTANCE;
});
// Some CP is issued onto the ledger by MegaCorp.
l.transaction("Issuance", tx -> {
tx.output(JCP_PROGRAM_ID, "paper", getPaper());
tx.command(megaCorp.getPublicKey(), new JavaCommercialPaper.Commands.Issue());
tx.attachments(JCP_PROGRAM_ID);
tx.timeWindow(TEST_TX_TIME);
return tx.verifies();
});
l.transaction("Trade", tx -> {
tx.input("paper");
tx.input("alice's $900");
tx.output(Cash.PROGRAM_ID, "borrowed $900", new Cash.State(issuedBy(DOLLARS(900), issuer), megaCorp.getParty()));
JavaCommercialPaper.State inputPaper = l.retrieveOutput(JavaCommercialPaper.State.class, "paper");
tx.output(JCP_PROGRAM_ID, "alice's paper", inputPaper.withOwner(alice.getParty()));
tx.command(alice.getPublicKey(), new Cash.Commands.Move());
tx.command(megaCorp.getPublicKey(), new JavaCommercialPaper.Commands.Move());
return tx.verifies();
});
return Unit.INSTANCE;
});
}
In this example we declare that `ALICE` has $900 but we don't care where it came from. For this we can use `unverifiedTransaction`. Note how we don't need to specify `verifies()`.
Notice that we labelled the output with `"alice's $900"`, and in the transaction named `"Issuance"` we labelled the commercial paper with `"paper"`. We can now refer to them in subsequent transactions, e.g. by `input("alice's $900")` or `"paper".output<ICommercialPaperState>()`.
The last transaction, named `"Trade"`, exemplifies the simple fact of selling the `CommercialPaper` to Alice for her $900 - $100 less than the face value, i.e. 10% interest after only 7 days.
We can also test the whole ledger by calling `verifies()` and `fails()` at the ledger level. To do so, let's create a simple example that uses the same input twice:
@Test
fun `chain commercial paper double spend`() {
val issuer = megaCorp.party.ref(123)
ledgerServices.ledger(dummyNotary.party) {
unverifiedTransaction {
attachments(Cash.PROGRAM_ID)
output(Cash.PROGRAM_ID, "alice's $900", 900.DOLLARS.CASH issuedBy issuer ownedBy alice.party)
}
// Some CP is issued onto the ledger by MegaCorp.
transaction("Issuance") {
output(CP_PROGRAM_ID, "paper", getPaper())
command(megaCorp.publicKey, CommercialPaper.Commands.Issue())
attachments(CP_PROGRAM_ID)
timeWindow(TEST_TX_TIME)
verifies()
}
transaction("Trade") {
input("paper")
input("alice's $900")
output(Cash.PROGRAM_ID, "borrowed $900", 900.DOLLARS.CASH issuedBy issuer ownedBy megaCorp.party)
output(CP_PROGRAM_ID, "alice's paper", "paper".output<ICommercialPaperState>().withOwner(alice.party))
command(alice.publicKey, Cash.Commands.Move())
command(megaCorp.publicKey, CommercialPaper.Commands.Move())
verifies()
}
transaction {
input("paper")
// We moved a paper to another pubkey.
output(CP_PROGRAM_ID, "bob's paper", "paper".output<ICommercialPaperState>().withOwner(bob.party))
command(megaCorp.publicKey, CommercialPaper.Commands.Move())
verifies()
}
fails()
}
}
@Test
public void chainCommercialPaperDoubleSpend() {
PartyAndReference issuer = megaCorp.ref(defaultRef);
ledger(ledgerServices, l -> {
l.unverifiedTransaction(tx -> {
tx.output(Cash.PROGRAM_ID, "alice's $900",
new Cash.State(issuedBy(DOLLARS(900), issuer), alice.getParty()));
tx.attachments(Cash.PROGRAM_ID);
return Unit.INSTANCE;
});
// Some CP is issued onto the ledger by MegaCorp.
l.transaction("Issuance", tx -> {
tx.output(JCP_PROGRAM_ID, "paper", getPaper());
tx.command(megaCorp.getPublicKey(), new JavaCommercialPaper.Commands.Issue());
tx.attachments(JCP_PROGRAM_ID);
tx.timeWindow(TEST_TX_TIME);
return tx.verifies();
});
l.transaction("Trade", tx -> {
tx.input("paper");
tx.input("alice's $900");
tx.output(Cash.PROGRAM_ID, "borrowed $900", new Cash.State(issuedBy(DOLLARS(900), issuer), megaCorp.getParty()));
JavaCommercialPaper.State inputPaper = l.retrieveOutput(JavaCommercialPaper.State.class, "paper");
tx.output(JCP_PROGRAM_ID, "alice's paper", inputPaper.withOwner(alice.getParty()));
tx.command(alice.getPublicKey(), new Cash.Commands.Move());
tx.command(megaCorp.getPublicKey(), new JavaCommercialPaper.Commands.Move());
return tx.verifies();
});
l.transaction(tx -> {
tx.input("paper");
JavaCommercialPaper.State inputPaper = l.retrieveOutput(JavaCommercialPaper.State.class, "paper");
// We moved a paper to other pubkey.
tx.output(JCP_PROGRAM_ID, "bob's paper", inputPaper.withOwner(bob.getParty()));
tx.command(megaCorp.getPublicKey(), new JavaCommercialPaper.Commands.Move());
return tx.verifies();
});
l.fails();
return Unit.INSTANCE;
});
}
Each transaction verifies() individually, yet the "paper" state was spent twice! That's why we need the global ledger verification (the fails() at the end). As in previous examples, we can use tweak to create a local copy of the whole ledger:
@Test
fun `chain commercial tweak`() {
val issuer = megaCorp.party.ref(123)
ledgerServices.ledger(dummyNotary.party) {
unverifiedTransaction {
attachments(Cash.PROGRAM_ID)
output(Cash.PROGRAM_ID, "alice's $900", 900.DOLLARS.CASH issuedBy issuer ownedBy alice.party)
}
// Some CP is issued onto the ledger by MegaCorp.
transaction("Issuance") {
output(CP_PROGRAM_ID, "paper", getPaper())
command(megaCorp.publicKey, CommercialPaper.Commands.Issue())
attachments(CP_PROGRAM_ID)
timeWindow(TEST_TX_TIME)
verifies()
}
transaction("Trade") {
input("paper")
input("alice's $900")
output(Cash.PROGRAM_ID, "borrowed $900", 900.DOLLARS.CASH issuedBy issuer ownedBy megaCorp.party)
output(CP_PROGRAM_ID, "alice's paper", "paper".output<ICommercialPaperState>().withOwner(alice.party))
command(alice.publicKey, Cash.Commands.Move(CommercialPaper::class.java))
command(megaCorp.publicKey, CommercialPaper.Commands.Move())
verifies()
}
tweak {
transaction {
input("paper")
// We moved a paper to another pubkey.
output(CP_PROGRAM_ID, "bob's paper", "paper".output<ICommercialPaperState>().withOwner(bob.party))
command(megaCorp.publicKey, CommercialPaper.Commands.Move())
verifies()
}
fails()
}
verifies()
}
}
@Test
public void chainCommercialPaperTweak() {
PartyAndReference issuer = megaCorp.ref(defaultRef);
ledger(ledgerServices, l -> {
l.unverifiedTransaction(tx -> {
tx.output(Cash.PROGRAM_ID, "alice's $900",
new Cash.State(issuedBy(DOLLARS(900), issuer), alice.getParty()));
tx.attachments(Cash.PROGRAM_ID);
return Unit.INSTANCE;
});
// Some CP is issued onto the ledger by MegaCorp.
l.transaction("Issuance", tx -> {
tx.output(JCP_PROGRAM_ID, "paper", getPaper());
tx.command(megaCorp.getPublicKey(), new JavaCommercialPaper.Commands.Issue());
tx.attachments(JCP_PROGRAM_ID);
tx.timeWindow(TEST_TX_TIME);
return tx.verifies();
});
l.transaction("Trade", tx -> {
tx.input("paper");
tx.input("alice's $900");
tx.output(Cash.PROGRAM_ID, "borrowed $900", new Cash.State(issuedBy(DOLLARS(900), issuer), megaCorp.getParty()));
JavaCommercialPaper.State inputPaper = l.retrieveOutput(JavaCommercialPaper.State.class, "paper");
tx.output(JCP_PROGRAM_ID, "alice's paper", inputPaper.withOwner(alice.getParty()));
tx.command(alice.getPublicKey(), new Cash.Commands.Move(JavaCommercialPaper.class));
tx.command(megaCorp.getPublicKey(), new JavaCommercialPaper.Commands.Move());
return tx.verifies();
});
l.tweak(lw -> {
lw.transaction(tx -> {
tx.input("paper");
JavaCommercialPaper.State inputPaper = l.retrieveOutput(JavaCommercialPaper.State.class, "paper");
// We moved a paper to another pubkey.
tx.output(JCP_PROGRAM_ID, "bob's paper", inputPaper.withOwner(bob.getParty()));
tx.command(megaCorp.getPublicKey(), new JavaCommercialPaper.Commands.Move());
return tx.verifies();
});
lw.fails();
return Unit.INSTANCE;
});
l.verifies();
return Unit.INSTANCE;
});
}
Upgrading contracts¶
While every care is taken in development of contract code, inevitably upgrades will be required to fix bugs (in either design or implementation). Upgrades can involve a substitution of one version of the contract code for another or changing to a different contract that understands how to migrate the existing state objects. When state objects are added as outputs to transactions, they are linked to the contract code they are intended for via the StateAndContract type. Changing a state's contract only requires substituting one ContractClassName for another.
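As a minimal sketch of what such a substitution can look like (the MyContractV2, MyStateV1 and com.example names below are hypothetical illustrations, not part of any real CorDapp), a V2 contract implements UpgradedContract, names the legacy ContractClassName it replaces, and maps each old state to its new form:
import net.corda.core.contracts.ContractClassName
import net.corda.core.contracts.ContractState
import net.corda.core.contracts.UpgradedContract
import net.corda.core.identity.AbstractParty
import net.corda.core.transactions.LedgerTransaction

// Hypothetical V1 state standing in for whatever state is being upgraded.
data class MyStateV1(override val participants: List<AbstractParty>) : ContractState

class MyContractV2 : UpgradedContract<MyStateV1, MyContractV2.State> {
    data class State(val note: String, override val participants: List<AbstractParty>) : ContractState

    // Fully-qualified name of the contract class being replaced.
    override val legacyContract: ContractClassName = "com.example.MyContractV1"

    // How each existing V1 state is reissued in its V2 form.
    override fun upgrade(state: MyStateV1) = State(note = "upgraded", participants = state.participants)

    override fun verify(tx: LedgerTransaction) {
        // V2 verification rules go here.
    }
}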
Workflow¶
Here’s the workflow for contract upgrades:
- Banks A and B negotiate a trade, off-platform
- Banks A and B execute a flow to construct a state object representing the trade, using contract X, and include it in a transaction (which is then signed and sent to the consensus service)
- Time passes
- The developer of contract X discovers a bug in the contract code, and releases a new version, contract Y. The developer will then notify all existing users (e.g. via a mailing list or CorDapp store) to stop their nodes from issuing further states with contract X
- Banks A and B review the new contract via standard change control processes and identify the contract states they agree to upgrade (they may decide not to upgrade some contract states as these might be needed for some other obligation contract)
- Banks A and B instruct their Corda nodes (via RPC) to be willing to upgrade state objects with contract X to state objects with contract Y using the agreed upgrade path
- One of the parties (the Initiator) initiates a flow to replace state objects referring to contract X with new state objects referring to contract Y
- A proposed transaction (the Proposal), with the old states as input and the reissued states as outputs, is created and signed with the node's private key
- The Initiator node sends the proposed transaction, along with details of the new contract upgrade path that it is proposing, to all participants of the state object
- Each counterparty (the Acceptors) verifies the proposal, signs or rejects the state reissuance accordingly, and sends a signature or rejection notification back to the initiating node
- If signatures are received from all parties, the Initiator assembles the complete signed transaction and sends it to the notary
Authorising an upgrade¶
Each of the participants in the state for which the contract is being upgraded will have to instruct their node that
they agree to the upgrade before the upgrade can take place. The ContractUpgradeFlow
is used to manage the
authorisation process. Each node administrator can use RPC to trigger either an Authorise
or a Deauthorise
flow
for the state in question.
@StartableByRPC
class Authorise(
val stateAndRef: StateAndRef<*>,
private val upgradedContractClass: Class<out UpgradedContract<*, *>>
) : FlowLogic<Void?>() {
@StartableByRPC
class Deauthorise(val stateRef: StateRef) : FlowLogic<Void?>() {
@Suspendable
override fun call(): Void? {
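For example, an administrator could withdraw a previously granted authorisation with a call along these lines (a sketch, assuming rpc is a connected CordaRPCOps proxy and ref is the StateRef of the state in question):
// Revoke the upgrade authorisation for the given state over RPC.
rpc.startFlowDynamic(ContractUpgradeFlow.Deauthorise::class.java, ref)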
Proposing an upgrade¶
After all parties have authorised the contract upgrade for the state, one of the contract participants can initiate the
upgrade process by triggering the ContractUpgradeFlow.Initiate
flow. Initiate
creates a transaction including
the old state and the updated state, and sends it to each of the participants. Each participant will verify the
transaction, create a signature over it, and send the signature back to the initiator. Once all the signatures are
collected, the transaction will be notarised and persisted to every participant’s vault.
Example¶
Suppose Bank A has entered into an agreement with Bank B which is represented by the state object
DummyContractState
and governed by the contract code DummyContract
. A few days after the exchange of contracts,
the developer of the contract code discovers a bug in the contract code.
Bank A and Bank B decide to upgrade the contract to DummyContractV2:
- The developer creates a new contract DummyContractV2 extending the UpgradedContract class, and a new state object DummyContractV2.State referencing the new contract.
- Bank A instructs its node to accept the contract upgrade to DummyContractV2 for the contract state.
val rpcClient : CordaRPCClient = << Bank A's Corda RPC Client >>
val rpcA = rpcClient.proxy()
rpcA.startFlow(ContractUpgradeFlow.Authorise(<<StateAndRef of the contract state>>, DummyContractV2::class.java))
Bank B initiates the upgrade flow, which will send an upgrade proposal to all contract participants. Each of the participants of the contract state will sign and return the contract state upgrade proposal once they have validated and agreed with the upgrade. The upgraded transaction will be recorded in every participant’s node by the flow.
val rpcClient : CordaRPCClient = << Bank B's Corda RPC Client >>
val rpcB = rpcClient.proxy()
rpcB.startFlow({ stateAndRef, upgrade -> ContractUpgradeFlow.Initiate(stateAndRef, upgrade) },
<<StateAndRef of the contract state>>,
DummyContractV2::class.java)
Note
See ContractUpgradeFlowTest
for more detailed code examples.
Integration testing¶
Integration testing involves bringing up nodes locally and testing invariants about them by starting flows and inspecting their state.
In this tutorial we will bring up three nodes - Alice, Bob and a notary. Alice will issue cash to Bob, then Bob will send this cash back to Alice. Along the way we will see how to test some simple deterministic and nondeterministic invariants.
Note
This example, where Alice self-issues cash, is purely for demonstration purposes; in reality, cash would be issued by a bank and subsequently passed around.
In order to spawn nodes we will use the Driver DSL. This DSL allows one to start up node processes from code. It creates a local network in which all the nodes can see each other, and provides safe shutdown of the nodes in the background.
driver(DriverParameters(startNodesInProcess = true, cordappsForAllNodes = FINANCE_CORDAPPS)) {
val aliceUser = User("aliceUser", "testPassword1", permissions = setOf(
startFlow<CashIssueAndPaymentFlow>(),
invokeRpc("vaultTrackBy")
))
val bobUser = User("bobUser", "testPassword2", permissions = setOf(
startFlow<CashPaymentFlow>(),
invokeRpc("vaultTrackBy")
))
val (alice, bob) = listOf(
startNode(providedName = ALICE_NAME, rpcUsers = listOf(aliceUser)),
startNode(providedName = BOB_NAME, rpcUsers = listOf(bobUser))
).map { it.getOrThrow() }
driver(new DriverParameters()
.withStartNodesInProcess(true)
.withCordappsForAllNodes(FINANCE_CORDAPPS), dsl -> {
User aliceUser = new User("aliceUser", "testPassword1", new HashSet<>(asList(
startFlow(CashIssueAndPaymentFlow.class),
invokeRpc("vaultTrack")
)));
User bobUser = new User("bobUser", "testPassword2", new HashSet<>(asList(
startFlow(CashPaymentFlow.class),
invokeRpc("vaultTrack")
)));
try {
List<CordaFuture<NodeHandle>> nodeHandleFutures = asList(
dsl.startNode(new NodeParameters().withProvidedName(ALICE_NAME).withRpcUsers(singletonList(aliceUser))),
dsl.startNode(new NodeParameters().withProvidedName(BOB_NAME).withRpcUsers(singletonList(bobUser)))
);
NodeHandle alice = nodeHandleFutures.get(0).get();
NodeHandle bob = nodeHandleFutures.get(1).get();
The above code starts two nodes:
- Alice, configured with an RPC user who has permissions to start the CashIssueAndPaymentFlow flow on it and query Alice's vault.
- Bob, configured with an RPC user who only has permissions to start the CashPaymentFlow and query Bob's vault.
Note
You will notice that we did not start a notary. This is done automatically for us by the driver - it creates
a notary node with the name DUMMY_NOTARY_NAME
which is visible to both nodes. If you wish to customise this, for
example create more notaries, then specify the DriverParameters.notarySpecs
parameter.
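As a hedged sketch of that customisation (using the NotarySpec and DUMMY_NOTARY_NAME test helpers from the Corda test libraries), a single non-validating notary could be requested like this:
driver(DriverParameters(
        startNodesInProcess = true,
        // One non-validating notary instead of the default.
        notarySpecs = listOf(NotarySpec(DUMMY_NOTARY_NAME, validating = false))
)) {
    // Nodes started here will all see the notary configured above.
}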
The startNode function returns a CordaFuture object that completes once the node is fully started and visible on the local network. Returning a future allows the nodes to be started in parallel. We wait on these futures because we need the information they return: their respective NodeHandles.
val aliceClient = CordaRPCClient(alice.rpcAddress)
val aliceProxy: CordaRPCOps = aliceClient.start("aliceUser", "testPassword1").proxy
val bobClient = CordaRPCClient(bob.rpcAddress)
val bobProxy: CordaRPCOps = bobClient.start("bobUser", "testPassword2").proxy
CordaRPCClient aliceClient = new CordaRPCClient(alice.getRpcAddress());
CordaRPCOps aliceProxy = aliceClient.start("aliceUser", "testPassword1").getProxy();
CordaRPCClient bobClient = new CordaRPCClient(bob.getRpcAddress());
CordaRPCOps bobProxy = bobClient.start("bobUser", "testPassword2").getProxy();
Next we connect to Alice and Bob from the test process using the test users we created. We establish RPC links that allow us to start flows and query state.
val bobVaultUpdates: Observable<Vault.Update<Cash.State>> = bobProxy.vaultTrackBy<Cash.State>().updates
val aliceVaultUpdates: Observable<Vault.Update<Cash.State>> = aliceProxy.vaultTrackBy<Cash.State>().updates
Observable<Vault.Update<Cash.State>> bobVaultUpdates = bobProxy.vaultTrack(Cash.State.class).getUpdates();
Observable<Vault.Update<Cash.State>> aliceVaultUpdates = aliceProxy.vaultTrack(Cash.State.class).getUpdates();
We will be interested in changes to Alice’s and Bob’s vault, so we query a stream of vault updates from each.
Now that we’re all set up we can finally get some cash action going!
val issueRef = OpaqueBytes.of(0)
aliceProxy.startFlow(::CashIssueAndPaymentFlow,
1000.DOLLARS,
issueRef,
bob.nodeInfo.singleIdentity(),
true,
defaultNotaryIdentity
).returnValue.getOrThrow()
bobVaultUpdates.expectEvents {
expect { update ->
println("Bob got vault update of $update")
val amount: Amount<Issued<Currency>> = update.produced.first().state.data.amount
assertEquals(1000.DOLLARS, amount.withoutIssuer())
}
}
OpaqueBytes issueRef = OpaqueBytes.of((byte)0);
aliceProxy.startFlowDynamic(
CashIssueAndPaymentFlow.class,
DOLLARS(1000),
issueRef,
bob.getNodeInfo().getLegalIdentities().get(0),
true,
dsl.getDefaultNotaryIdentity()
).getReturnValue().get();
@SuppressWarnings("unchecked")
Class<Vault.Update<Cash.State>> cashVaultUpdateClass = (Class<Vault.Update<Cash.State>>)(Class<?>)Vault.Update.class;
expectEvents(bobVaultUpdates, true, () ->
expect(cashVaultUpdateClass, update -> true, update -> {
System.out.println("Bob got vault update of " + update);
Amount<Issued<Currency>> amount = update.getProduced().iterator().next().getState().getData().getAmount();
assertEquals(DOLLARS(1000), Structures.withoutIssuer(amount));
return null;
})
);
We start a CashIssueAndPaymentFlow flow on the Alice node. We specify that we want Alice to self-issue $1000, which is to be paid to Bob. We specify the default notary identity created by the driver as the notary responsible for notarising the created states. Note that no notarisation will occur yet, as we're not spending any states, only creating new ones on the ledger.
We expect a single update to Bob’s vault when it receives the $1000 from Alice. This is what the expectEvents
call
is asserting.
bobProxy.startFlow(::CashPaymentFlow, 1000.DOLLARS, alice.nodeInfo.singleIdentity()).returnValue.getOrThrow()
aliceVaultUpdates.expectEvents {
expect { update ->
println("Alice got vault update of $update")
val amount: Amount<Issued<Currency>> = update.produced.first().state.data.amount
assertEquals(1000.DOLLARS, amount.withoutIssuer())
}
}
bobProxy.startFlowDynamic(
CashPaymentFlow.class,
DOLLARS(1000),
alice.getNodeInfo().getLegalIdentities().get(0)
).getReturnValue().get();
expectEvents(aliceVaultUpdates, true, () ->
expect(cashVaultUpdateClass, update -> true, update -> {
System.out.println("Alice got vault update of " + update);
Amount<Issued<Currency>> amount = update.getProduced().iterator().next().getState().getData().getAmount();
assertEquals(DOLLARS(1000), Structures.withoutIssuer(amount));
return null;
})
);
Next we want Bob to send this cash back to Alice.
That's it! We saw how to start up several Corda nodes locally, how to connect to them, and how to test some simple invariants about CashIssueAndPaymentFlow and CashPaymentFlow.
You can find the complete test at example-code/src/integration-test/java/net/corda/docs/java/tutorial/test/JavaIntegrationTestingTutorial.java
(Java) and example-code/src/integration-test/kotlin/net/corda/docs/kotlin/tutorial/test/KotlinIntegrationTestingTutorial.kt
(Kotlin) in the
Corda repo.
Note
To make sure the driver classes are included in your project you will need the following in your build.gradle
file in the module in
which you want to test:
testCompile "$corda_release_group:corda-node-driver:$corda_release_version"
Using the client RPC API¶
In this tutorial we will build a simple command line utility that connects to a node, creates some cash transactions, and dumps the transaction graph to standard output. We will then put some simple visualisation on top. For an explanation of how RPC works in Corda, see Interacting with a node.
We start off by connecting to the node itself. For the purposes of the tutorial we will use the Driver to start up a notary and an Alice node that can issue, move and exit cash.
Here’s how we configure the node to create a user that has the permissions to start the CashIssueFlow
,
CashPaymentFlow
, and CashExitFlow
:
enum class PrintOrVisualise {
Print,
Visualise
}
@Suppress("DEPRECATION")
fun main(args: Array<String>) {
require(args.isNotEmpty()) { "Usage: <binary> [Print|Visualise]" }
val printOrVisualise = PrintOrVisualise.valueOf(args[0])
val baseDirectory = Paths.get("build/rpc-api-tutorial")
val user = User("user", "password", permissions = setOf(startFlow<CashIssueFlow>(),
startFlow<CashPaymentFlow>(),
startFlow<CashExitFlow>(),
invokeRpc(CordaRPCOps::nodeInfo)
))
driver(DriverParameters(driverDirectory = baseDirectory, cordappsForAllNodes = FINANCE_CORDAPPS, waitForAllNodesToFinish = true)) {
val node = startNode(providedName = ALICE_NAME, rpcUsers = listOf(user)).get()
Now we can connect to the node itself using a valid RPC user login and start generating transactions in a different
thread using generateTransactions
(to be defined later):
val client = CordaRPCClient(node.rpcAddress)
val proxy = client.start("user", "password").proxy
thread {
generateTransactions(proxy)
}
proxy
exposes the full RPC interface of the node:
/** Returns a list of currently in-progress state machine infos. */
fun stateMachinesSnapshot(): List<StateMachineInfo>
/**
* Returns a data feed of currently in-progress state machine infos and an observable of
* future state machine adds/removes.
*/
@RPCReturnsObservables
fun stateMachinesFeed(): DataFeed<List<StateMachineInfo>, StateMachineUpdate>
/**
* Returns a snapshot of vault states for a given query criteria (and optional order and paging specification)
*
* Generic vault query function which takes a [QueryCriteria] object to define filters,
* optional [PageSpecification] and optional [Sort] modification criteria (default unsorted),
* and returns a [Vault.Page] object containing the following:
* 1. states as a List of <StateAndRef> (page number and size defined by [PageSpecification])
* 2. states metadata as a List of [Vault.StateMetadata] held in the Vault States table.
* 3. total number of results available if [PageSpecification] supplied (otherwise returns -1)
* 4. status types used in this query: UNCONSUMED, CONSUMED, ALL
* 5. other results (aggregate functions with/without using value groups)
*
* @throws VaultQueryException if the query cannot be executed for any reason
* (missing criteria or parsing error, paging errors, unsupported query, underlying database error)
*
* Notes
* If no [PageSpecification] is provided, a maximum of [DEFAULT_PAGE_SIZE] results will be returned.
* API users must specify a [PageSpecification] if they are expecting more than [DEFAULT_PAGE_SIZE] results,
* otherwise a [VaultQueryException] will be thrown alerting to this condition.
* It is the responsibility of the API user to request further pages and/or specify a more suitable [PageSpecification].
*/
// DOCSTART VaultQueryByAPI
@RPCReturnsObservables
fun <T : ContractState> vaultQueryBy(criteria: QueryCriteria,
paging: PageSpecification,
sorting: Sort,
contractStateType: Class<out T>): Vault.Page<T>
// DOCEND VaultQueryByAPI
// Note: cannot apply @JvmOverloads to interfaces nor interface implementations
// Java Helpers
// DOCSTART VaultQueryAPIHelpers
fun <T : ContractState> vaultQuery(contractStateType: Class<out T>): Vault.Page<T>
fun <T : ContractState> vaultQueryByCriteria(criteria: QueryCriteria, contractStateType: Class<out T>): Vault.Page<T>
fun <T : ContractState> vaultQueryByWithPagingSpec(contractStateType: Class<out T>, criteria: QueryCriteria, paging: PageSpecification): Vault.Page<T>
fun <T : ContractState> vaultQueryByWithSorting(contractStateType: Class<out T>, criteria: QueryCriteria, sorting: Sort): Vault.Page<T>
// DOCEND VaultQueryAPIHelpers
/**
* Returns a snapshot (as per queryBy) and an observable of future updates to the vault for the given query criteria.
*
* Generic vault query function which takes a [QueryCriteria] object to define filters,
* optional [PageSpecification] and optional [Sort] modification criteria (default unsorted),
* and returns a [DataFeed] object containing
* 1) a snapshot as a [Vault.Page] (described previously in [CordaRPCOps.vaultQueryBy])
* 2) an [Observable] of [Vault.Update]
*
* Notes: the snapshot part of the query adheres to the same behaviour as the [CordaRPCOps.vaultQueryBy] function.
* the [QueryCriteria] applies to both snapshot and deltas (streaming updates).
*/
// DOCSTART VaultTrackByAPI
@RPCReturnsObservables
fun <T : ContractState> vaultTrackBy(criteria: QueryCriteria,
paging: PageSpecification,
sorting: Sort,
contractStateType: Class<out T>): DataFeed<Vault.Page<T>, Vault.Update<T>>
// DOCEND VaultTrackByAPI
// Note: cannot apply @JvmOverloads to interfaces nor interface implementations
// Java Helpers
// DOCSTART VaultTrackAPIHelpers
fun <T : ContractState> vaultTrack(contractStateType: Class<out T>): DataFeed<Vault.Page<T>, Vault.Update<T>>
fun <T : ContractState> vaultTrackByCriteria(contractStateType: Class<out T>, criteria: QueryCriteria): DataFeed<Vault.Page<T>, Vault.Update<T>>
fun <T : ContractState> vaultTrackByWithPagingSpec(contractStateType: Class<out T>, criteria: QueryCriteria, paging: PageSpecification): DataFeed<Vault.Page<T>, Vault.Update<T>>
fun <T : ContractState> vaultTrackByWithSorting(contractStateType: Class<out T>, criteria: QueryCriteria, sorting: Sort): DataFeed<Vault.Page<T>, Vault.Update<T>>
// DOCEND VaultTrackAPIHelpers
/**
* @suppress Returns a list of all recorded transactions.
*
* TODO This method should be removed once SGX work is finalised and the design of the corresponding API using [FilteredTransaction] can be started
*/
@Deprecated("This method is intended only for internal use and will be removed from the public API soon.")
fun internalVerifiedTransactionsSnapshot(): List<SignedTransaction>
/**
* @suppress Returns the full transaction for the provided ID
*
* TODO This method should be removed once SGX work is finalised and the design of the corresponding API using [FilteredTransaction] can be started
*/
@CordaInternal
@Deprecated("This method is intended only for internal use and will be removed from the public API soon.")
fun internalFindVerifiedTransaction(txnId: SecureHash): SignedTransaction?
/**
* @suppress Returns a data feed of all recorded transactions and an observable of future recorded ones.
*
* TODO This method should be removed once SGX work is finalised and the design of the corresponding API using [FilteredTransaction] can be started
*/
@Deprecated("This method is intended only for internal use and will be removed from the public API soon.")
@RPCReturnsObservables
fun internalVerifiedTransactionsFeed(): DataFeed<List<SignedTransaction>, SignedTransaction>
/** Returns a snapshot list of existing state machine id - recorded transaction hash mappings. */
fun stateMachineRecordedTransactionMappingSnapshot(): List<StateMachineTransactionMapping>
/**
* Returns a snapshot list of existing state machine id - recorded transaction hash mappings, and a stream of future
* such mappings as well.
*/
@RPCReturnsObservables
fun stateMachineRecordedTransactionMappingFeed(): DataFeed<List<StateMachineTransactionMapping>, StateMachineTransactionMapping>
/** Returns all parties currently visible on the network with their advertised services. */
fun networkMapSnapshot(): List<NodeInfo>
/**
* Returns all parties currently visible on the network with their advertised services and an observable of
* future updates to the network.
*/
@RPCReturnsObservables
fun networkMapFeed(): DataFeed<List<NodeInfo>, NetworkMapCache.MapChange>
/** Returns the network parameters the node is operating under. */
val networkParameters: NetworkParameters
/**
* Returns [DataFeed] object containing information on currently scheduled parameters update (null if none are currently scheduled)
* and observable with future update events. Any update that occurs before the deadline automatically cancels the current one.
* Only the latest update can be accepted.
* Note: This operation may be restricted only to node administrators.
*/
// TODO This operation should be restricted to just node admins.
@RPCReturnsObservables
fun networkParametersFeed(): DataFeed<ParametersUpdateInfo?, ParametersUpdateInfo>
/**
* Accept network parameters with given hash, hash is obtained through [networkParametersFeed] method.
* Information is sent back to the zone operator that the node accepted the parameters update - this process cannot be
* undone.
* Only parameters that are scheduled for update can be accepted, if different hash is provided this method will fail.
* Note: This operation may be restricted only to node administrators.
* @param parametersHash hash of network parameters to accept
* @throws IllegalArgumentException if network map advertises update with different parameters hash than the one accepted by node's operator.
* @throws IOException if failed to send the approval to network map
*/
// TODO This operation should be restricted to just node admins.
fun acceptNewNetworkParameters(parametersHash: SecureHash)
/**
* Start the given flow with the given arguments. [logicType] must be annotated
* with [net.corda.core.flows.StartableByRPC].
*/
@RPCReturnsObservables
fun <T> startFlowDynamic(logicType: Class<out FlowLogic<T>>, vararg args: Any?): FlowHandle<T>
/**
* Start the given flow with the given arguments, returning an [Observable] with a single observation of the
* result of running the flow. [logicType] must be annotated with [net.corda.core.flows.StartableByRPC].
*/
@RPCReturnsObservables
fun <T> startTrackedFlowDynamic(logicType: Class<out FlowLogic<T>>, vararg args: Any?): FlowProgressHandle<T>
/**
* Attempts to kill a flow. This is not a clean termination and should be reserved for exceptional cases such as stuck fibers.
*
* @return whether the flow existed and was killed.
*/
fun killFlow(id: StateMachineRunId): Boolean
/** Returns Node's NodeInfo, assuming this will not change while the node is running. */
fun nodeInfo(): NodeInfo
/**
* Returns network's notary identities, assuming this will not change while the node is running.
*
* Note that the identities are sorted based on legal name, and the ordering might change once new notaries are introduced.
*/
fun notaryIdentities(): List<Party>
/** Add note(s) to an existing Vault transaction. */
fun addVaultTransactionNote(txnId: SecureHash, txnNote: String)
/** Retrieve existing note(s) for a given Vault transaction. */
fun getVaultTransactionNotes(txnId: SecureHash): Iterable<String>
/** Checks whether an attachment with the given hash is stored on the node. */
fun attachmentExists(id: SecureHash): Boolean
/** Download an attachment JAR by ID. */
fun openAttachment(id: SecureHash): InputStream
/** Uploads a jar to the node, returns its hash. */
@Throws(java.nio.file.FileAlreadyExistsException::class)
fun uploadAttachment(jar: InputStream): SecureHash
/** Uploads a jar including metadata to the node, returns its hash. */
@Throws(java.nio.file.FileAlreadyExistsException::class)
fun uploadAttachmentWithMetadata(jar: InputStream, uploader: String, filename: String): SecureHash
/** Queries attachments metadata */
fun queryAttachments(query: AttachmentQueryCriteria, sorting: AttachmentSort?): List<AttachmentId>
/** Returns the node's current time. */
fun currentNodeTime(): Instant
/**
* Returns a [CordaFuture] which completes when the node has registered with the network map service. It can also
* complete with an exception if it is unable to.
*/
@RPCReturnsObservables
fun waitUntilNetworkReady(): CordaFuture<Void?>
// TODO These need rethinking. Instead of these direct calls we should have a way of replicating a subset of
// the node's state locally and query that directly.
/**
* Returns the well known identity from an abstract party. This is intended to resolve the well known identity
* from a confidential identity, however it transparently handles returning the well known identity back if
* a well known identity is passed in.
*
* @param party identity to determine well known identity for.
* @return well known identity, if found.
*/
fun wellKnownPartyFromAnonymous(party: AbstractParty): Party?
/** Returns the [Party] corresponding to the given key, if found. */
fun partyFromKey(key: PublicKey): Party?
/** Returns the [Party] with the X.500 principal as its [Party.name]. */
fun wellKnownPartyFromX500Name(x500Name: CordaX500Name): Party?
/**
* Get a notary identity by name.
*
* @return the notary identity, or null if there is no notary by that name. Note that this will return null if there
* is a peer with that name but they are not a recognised notary service.
*/
fun notaryPartyFromX500Name(x500Name: CordaX500Name): Party?
/**
* Returns a list of candidate matches for a given string, with optional fuzzy(ish) matching. Fuzzy matching may
* get smarter with time e.g. to correct spelling errors, so you should not hard-code indexes into the results
* but rather show them via a user interface and let the user pick the one they wanted.
*
* @param query The string to check against the X.500 name components
* @param exactMatch If true, a case sensitive match is done against each component of each X.500 name.
*/
fun partiesFromName(query: String, exactMatch: Boolean): Set<Party>
/** Enumerates the class names of the flows that this node knows about. */
fun registeredFlows(): List<String>
/**
* Returns a node's info from the network map cache, where known.
* Notice that when there are more than one node for a given name (in case of distributed services) first service node
* found will be returned.
*
* @return the node info if available.
*/
fun nodeInfoFromParty(party: AbstractParty): NodeInfo?
/**
* Clear all network map data from local node cache. Notice that after invoking this method your node will lose
* network map data and effectively won't be able to start any flow with the peers until network map is downloaded
* again on next poll - from `additional-node-infos` directory or from network map server. It depends on the
* polling interval when it happens. You can also use [refreshNetworkMapCache] to force next fetch from network map server
* (not from directory - it will happen automatically).
* If you run local test deployment and want clear view of the network, you may want to clear also `additional-node-infos`
* directory, because cache can be repopulated from there.
*/
fun clearNetworkMapCache()
/**
* Poll network map server if available for the network map. Notice that you need to have `compatibilityZone`
* or `networkServices` configured. This is normally done automatically on the regular time interval, but you may wish to
* have the fresh view of network earlier.
*/
fun refreshNetworkMapCache()
/** Sets the value of the node's flows draining mode.
* If this mode is [enabled], the node will reject new flows through RPC, ignore scheduled flows, and will not process
* initial session messages, meaning that P2P counterparties will not be able to initiate new flows involving the node.
*
* @param enabled whether the flows draining mode will be enabled.
* */
fun setFlowsDrainingModeEnabled(enabled: Boolean)
/**
* Returns whether the flows draining mode is enabled.
*
* @see setFlowsDrainingModeEnabled
*/
fun isFlowsDrainingModeEnabled(): Boolean
/**
* Shuts the node down. Returns immediately.
* This does not wait for flows to be completed.
*/
fun shutdown()
/**
* Shuts the node down. Returns immediately.
* @param drainPendingFlows whether the node will wait for pending flows to be completed before exiting. While draining, new flows from RPC will be rejected.
*/
fun terminate(drainPendingFlows: Boolean = false)
/**
* Returns whether the node is waiting for pending flows to complete before shutting down.
* Disabling draining mode cancels this state.
*
* @return whether the node will shutdown when the pending flows count reaches zero.
*/
fun isWaitingForShutdown(): Boolean
The RPC operation we need in order to dump the transaction graph is internalVerifiedTransactionsFeed. The type signature tells us that the RPC operation will return a list of transactions and an Observable stream. This is a general pattern: we query some data, and the node returns the current snapshot along with future updates made to it. Observables are described in further detail in Interacting with a node.
val (transactions: List<SignedTransaction>, futureTransactions: Observable<SignedTransaction>) = proxy.internalVerifiedTransactionsFeed()
The graph will be defined as follows:
- Each transaction is a vertex, represented by printing NODE <txhash>
- Each input-output relationship is an edge, represented by printing EDGE <txhash> <txhash>
when (printOrVisualise) {
PrintOrVisualise.Print -> {
futureTransactions.startWith(transactions).subscribe { transaction ->
println("NODE ${transaction.id}")
transaction.tx.inputs.forEach { (txhash) ->
println("EDGE $txhash ${transaction.id}")
}
}
}
Now we just need to create the transactions themselves!
fun generateTransactions(proxy: CordaRPCOps) {
val vault = proxy.vaultQueryBy<Cash.State>().states
var ownedQuantity = vault.fold(0L) { sum, state ->
sum + state.state.data.amount.quantity
}
val issueRef = OpaqueBytes.of(0)
val notary = proxy.notaryIdentities().first()
val me = proxy.nodeInfo().legalIdentities.first()
while (true) {
Thread.sleep(1000)
val random = SplittableRandom()
val n = random.nextDouble()
if (ownedQuantity > 10000 && n > 0.8) {
val quantity = Math.abs(random.nextLong()) % 2000
proxy.startFlow(::CashExitFlow, Amount(quantity, USD), issueRef)
ownedQuantity -= quantity
} else if (ownedQuantity > 1000 && n < 0.7) {
val quantity = Math.abs(random.nextLong() % Math.min(ownedQuantity, 2000))
proxy.startFlow(::CashPaymentFlow, Amount(quantity, USD), me)
} else {
val quantity = Math.abs(random.nextLong() % 1000)
proxy.startFlow(::CashIssueFlow, Amount(quantity, USD), issueRef, notary)
ownedQuantity += quantity
}
}
}
We utilise several RPC functions here to query things like the notaries in the node cluster or our own vault. These RPC functions also return Observable objects so that the node can send us updated values. However, we don't need updates here, so we mark these observables as notUsed (as a rule, you should always either subscribe to an Observable or mark it as not used; failing to do so will leak resources in the node).
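As a small sketch of that rule (assuming the notUsed() extension from the net.corda.client.rpc package is available), a feed whose update stream we don't care about can be discarded explicitly:
// Take the snapshot but explicitly discard the update stream, letting the
// node release the server-side observable instead of leaking it.
val (snapshot, updates) = proxy.networkMapFeed()
updates.notUsed()
println("Nodes currently visible: ${snapshot.size}")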
Then, in a loop, we randomly generate either an Issue, a Pay or an Exit transaction.
The RPC we need to initiate a cash transaction is startFlow
which starts an arbitrary flow given sufficient
permissions to do so.
Finally we have everything in place: we start a couple of nodes, connect to them, and start creating transactions while listening on successfully created ones, which are dumped to the console. We just need to run it!
# Build the example
./gradlew docs/source/example-code:installDist
# Start it
./docs/source/example-code/build/install/docs/source/example-code/bin/client-rpc-tutorial Print
Now let’s try to visualise the transaction graph. We will use a graph drawing library called graphstream.
PrintOrVisualise.Visualise -> {
val graph = MultiGraph("transactions")
transactions.forEach { transaction ->
graph.addNode<Node>("${transaction.id}")
}
transactions.forEach { transaction ->
transaction.tx.inputs.forEach { ref ->
graph.addEdge<Edge>("$ref", "${ref.txhash}", "${transaction.id}")
}
}
futureTransactions.subscribe { transaction ->
graph.addNode<Node>("${transaction.id}")
transaction.tx.inputs.forEach { ref ->
graph.addEdge<Edge>("$ref", "${ref.txhash}", "${transaction.id}")
}
}
graph.display()
}
}
If we run the client with Visualise
we should see a simple random graph being drawn as new transactions are being created.
Whitelisting classes from your CorDapp with the Corda node¶
As described in Interacting with a node, you have to whitelist with the Corda node any additional classes you add that are needed in RPC requests or responses. Here's an example of both ways you can do this for a couple of example classes.
// Not annotated, so need to whitelist manually.
data class ExampleRPCValue(val foo: String)
// Annotated, so no need to whitelist manually.
@CordaSerializable
data class ExampleRPCValue2(val bar: Int)
class ExampleRPCSerializationWhitelist : SerializationWhitelist {
// Add classes like this.
override val whitelist = listOf(ExampleRPCValue::class.java)
}
See more on plugins in Running nodes locally.
Security¶
RPC credentials associated with a client must match the permission set configured on the server node. This refers both to authentication (username and password) and to role-based authorisation (a permissioned set of RPC operations an authenticated user is entitled to run).
Note
Permissions are represented as Strings to allow RPC implementations to add their own permissioning. Currently the only permission type defined is StartFlow, which defines a list of whitelisted flows an authenticated user may execute. An administrator user (or a developer) may also be assigned the ALL permission, which grants access to any flow.
In the instructions above the server node permissions are configured programmatically in the driver code:
driver(driverDirectory = baseDirectory) {
val user = User("user", "password", permissions = setOf(startFlow<CashFlow>()))
val node = startNode("CN=Alice Corp,O=Alice Corp,L=London,C=GB", rpcUsers = listOf(user)).get()
When starting a standalone node using a configuration file we must supply the RPC credentials as follows:
rpcUsers : [
{ username=user, password=password, permissions=[ StartFlow.net.corda.finance.flows.CashFlow ] }
]
When using the gradle Cordformation plugin to configure and deploy a node you must supply the RPC credentials in a similar manner:
rpcUsers = [
['username' : "user",
'password' : "password",
'permissions' : ["StartFlow.net.corda.finance.flows.CashFlow"]]
]
You can then deploy and launch the nodes (Notary and Alice) as follows:
# to create a set of configs and installs under ``docs/source/example-code/build/nodes`` run
./gradlew docs/source/example-code:deployNodes
# to open up two new terminals with the two nodes run
./docs/source/example-code/build/nodes/runnodes
# followed by the same commands as before:
./docs/source/example-code/build/install/docs/source/example-code/bin/client-rpc-tutorial Print
./docs/source/example-code/build/install/docs/source/example-code/bin/client-rpc-tutorial Visualise
With regards to the start flow RPCs, there is an extra layer of security whereby the flow to be executed has to be annotated with @StartableByRPC. Flows without this annotation cannot be started via RPC.
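A minimal sketch of such an annotated flow (WhoAmIFlow is a hypothetical example, not part of Corda):
import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.StartableByRPC

// Without @StartableByRPC, startFlowDynamic(WhoAmIFlow::class.java) would be
// rejected by the node.
@StartableByRPC
class WhoAmIFlow : FlowLogic<String>() {
    @Suspendable
    override fun call(): String = ourIdentity.name.toString()
}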
See more on security in Secure coding guidelines, node configuration in Node configuration and Cordformation in Running nodes locally.
Building transactions¶
Introduction¶
Understanding and implementing transactions in Corda is key to building and implementing real world smart contracts. It is only through construction of valid Corda transactions containing appropriate data that nodes on the ledger can map real world business objects into a shared digital view of the data in the Corda ledger. More importantly, as the developer of new smart contracts, it is your code that determines which data is well formed and which should be rejected, either as a mistake or to prevent malicious activity. This document details some of the considerations and APIs used when constructing transactions as part of a flow.
The Basic Lifecycle Of Transactions¶
Transactions in Corda contain a number of elements:
- A set of Input state references that will be consumed by the final accepted transaction
- A set of Output states to create/replace the consumed states and thus become the new latest versions of data on the ledger
- A set of Attachment items which can contain legal documents, contract code, or private encrypted sections as an extension beyond the native contract states
- A set of Command items which indicate the type of ledger transition that is encoded in the transaction. Each command also has an associated set of signer keys, which will be required to sign the transaction
- A signers list, which is the union of the signers on the individual Command objects
- A notary identity to specify which notary node is tracking the state consumption (if the transaction's input states are registered with different notary nodes the flow will have to insert additional NotaryChange transactions to migrate the states across to a consistent notary node before being allowed to mutate any states)
- Optionally a time-window that can be used by the notary to bound the period during which the proposed transaction can be committed to the ledger
A transaction is built by populating a TransactionBuilder
. Once the builder is fully populated, the flow should freeze the TransactionBuilder
by signing it to create a SignedTransaction
. This is key to the ledger agreement process - once a flow has attached a node’s signature to a transaction, it has effectively stated that it accepts all the details of the transaction.
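A compact sketch of that populate-then-freeze pattern inside a flow (inputStateAndRef, outputState, MY_CONTRACT_ID and the Move command are hypothetical stand-ins for your contract's own types):
val builder = TransactionBuilder(notary)
builder.addInputState(inputStateAndRef)
builder.addOutputState(outputState, MY_CONTRACT_ID)
builder.addCommand(MyContract.Commands.Move(), ourIdentity.owningKey)
// Freeze the proposal: once signed, its contents can no longer be modified.
val selfSignedTx: SignedTransaction = serviceHub.signInitialTransaction(builder)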
It is best practice for flows to receive back the TransactionSignature of other parties rather than full SignedTransaction objects, because otherwise we have to separately check that this is still the same SignedTransaction and not a malicious substitute.
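In code, that practice might look like the following sketch, where a counterparty session returns just its TransactionSignature and we attach it to the proposal we already hold (session and selfSignedTx are assumed to be in scope):
// Receive only the signature, not a whole replacement transaction.
val theirSig = session.receive<TransactionSignature>().unwrap { it }
// Attach it to our own copy of the proposal, which we know is unmodified.
val bothSigned: SignedTransaction = selfSignedTx.withAdditionalSignature(theirSig)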
The final stage of committing the transaction to the ledger is to notarise the SignedTransaction
, distribute it to
all appropriate parties and record the data into the ledger. These actions are best delegated to the FinalityFlow
,
rather than calling the individual steps manually. However, do note that the final broadcast to the other nodes is
asynchronous, so care must be used in unit testing to correctly await the vault updates.
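Continuing the sketch above, delegating those final steps is then a single call, shown here in the Corda 4 style where the initiator passes its open counterparty sessions:
// Notarise, broadcast to participants, and record to the vault in one step.
val notarisedTx = subFlow(FinalityFlow(bothSigned, listOf(counterpartySession)))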
Gathering Inputs¶
One of the first steps to forming a transaction is gathering the set of
input references. This process will clearly vary according to the nature
of the business process being captured by the smart contract and the
parameterised details of the request. However, it will generally involve
searching the vault via the VaultService
interface on the
ServiceHub
to locate the input states.
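As a minimal illustration of such a vault search (assuming the finance Cash CorDapp is installed), the unconsumed Cash.State entries become the candidate inputs:
val candidates: List<StateAndRef<Cash.State>> =
        serviceHub.vaultService.queryBy<Cash.State>().states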
To give a few more specific details consider two simplified real world
scenarios. First, a basic foreign exchange cash transaction. This
transaction needs to locate a set of funds to exchange. A flow
modelling this is implemented in FxTransactionBuildTutorial.kt
(in the main Corda repo).
Second, a simple business model in which parties manually accept or
reject each other’s trade proposals, which is implemented in
WorkflowTransactionBuildTutorial.kt
(in the
main Corda repo). To run and explore these
examples using the IntelliJ IDE one can run/step through the respective unit
tests in FxTransactionBuildTutorialTest.kt
and
WorkflowTransactionBuildTutorialTest.kt
, which drive the flows as
part of a simulated in-memory network of nodes.
Note
Before creating the IntelliJ run configurations for these unit tests
go to Run -> Edit Configurations -> Defaults -> JUnit, add
-javaagent:lib/quasar.jar
to the VM options, and set Working directory to $PROJECT_DIR$
so that the Quasar
instrumentation is correctly configured.
For the cash transaction, let’s assume we are using the
standard CashState
in the :financial
Gradle module. The Cash
contract uses FungibleAsset
states to model holdings of
interchangeable assets and allow the splitting, merging and summing of
states to meet a contractual obligation. We would normally use the
Cash.generateSpend
method to gather the required
amount of cash into a TransactionBuilder
, set the outputs and generate the Move
command. However, to make things clearer, the example flow code shown
here will manually carry out the input queries by specifying relevant
query criteria filters to the tryLockFungibleStatesForSpending
method
of the VaultService
.
// This is equivalent to Cash.generateSpend,
// inlined here to make the filtering logic more visible in the example
private fun gatherOurInputs(serviceHub: ServiceHub,
lockId: UUID,
amountRequired: Amount<Issued<Currency>>,
notary: Party?): Pair<List<StateAndRef<Cash.State>>, Long> {
// extract our identity for convenience
val ourKeys = serviceHub.keyManagementService.keys
val ourParties = ourKeys.map { serviceHub.identityService.partyFromKey(it) ?: throw IllegalStateException("Unable to resolve party from key") }
val fungibleCriteria = QueryCriteria.FungibleAssetQueryCriteria(owner = ourParties)
val notaries = notary ?: serviceHub.networkMapCache.notaryIdentities.first()
val vaultCriteria: QueryCriteria = QueryCriteria.VaultQueryCriteria(notary = listOf(notaries as AbstractParty))
val logicalExpression = builder { CashSchemaV1.PersistentCashState::currency.equal(amountRequired.token.product.currencyCode) }
val cashCriteria = QueryCriteria.VaultCustomQueryCriteria(logicalExpression)
val fullCriteria = fungibleCriteria.and(vaultCriteria).and(cashCriteria)
val eligibleStates = serviceHub.vaultService.tryLockFungibleStatesForSpending(lockId, fullCriteria, amountRequired.withoutIssuer(), Cash.State::class.java)
check(eligibleStates.isNotEmpty()) { "Insufficient funds" }
val amount = eligibleStates.fold(0L) { tot, (state) -> tot + state.data.amount.quantity }
val change = amount - amountRequired.quantity
return Pair(eligibleStates, change)
}
This is a foreign exchange transaction, so we expect another set of input states of another currency from a
counterparty. However, the Corda privacy model means we are not aware of the other node’s states. Our flow must
therefore ask the other node to carry out a similar query and return the additional inputs to the transaction (see the
ForeignExchangeFlow
for more details of the exchange). We now have all the required input StateRef
items, and
can turn to gathering the outputs.
For the trade approval flow we need to implement a simple workflow
pattern. We start by recording the unconfirmed trade details in a state
object implementing the LinearState
interface. One field of this
record is used to map the business workflow to an enumerated state.
Initially the initiator creates a new state object which receives a new
UniqueIdentifier
in its linearId
property and a starting
workflow state of NEW
. The Contract.verify
method is written to
allow the initiator to sign this initial transaction and send it to the
other party. This pattern ensures that a permanent copy is recorded on
both ledgers for audit purposes, but the state is prevented from being
maliciously put in an approved state. The subsequent workflow steps then
follow with transactions that consume the state as inputs on one side
and output a new version with whatever state updates, or amendments
match to the business process, the linearId
being preserved across
the changes. Attached Command
objects help the verify method
restrict changes to appropriate fields and signers at each step in the
workflow. In this it is typical to have both parties sign the change
transactions, but it can be valid to allow unilateral signing, if for instance
one side could block a rejection. Commonly the manual initiator of these
workflows will query the Vault for states of the right contract type and
in the right workflow state over the RPC interface. The RPC will then
initiate the relevant flow using StateRef
, or linearId
values as
parameters to the flow to identify the states being operated upon. Thus
code to gather the latest input state for a given StateRef
would use
the VaultService
as follows:
val criteria = VaultQueryCriteria(stateRefs = listOf(ref))
val latestRecord = serviceHub.vaultService.queryBy<TradeApprovalContract.State>(criteria).states.single()
Generating Commands¶
The commands that will be added to the transaction need to correctly reflect the task at hand. They must match because inside
the Contract.verify
method the command will be used to select the
validation code path. The Contract.verify
method will then restrict
the allowed contents of the transaction to reflect this context. Typical
restrictions might include that the input cash amount must equal the
output cash amount, or that a workflow step is only allowed to change
the status field. Sometimes, the command may capture some data too e.g.
the foreign exchange rate, or the identity of one party, or the StateRef
of the specific input that originates the command in a bulk operation.
This data will be used to further aid Contract.verify, because to ensure consistent, secure and reproducible behaviour in a distributed environment, Contract.verify is only allowed to use the content of the transaction itself to decide validity.
Another essential requirement for commands is that the correct set of
PublicKey
objects are added to the Command
on the builder, which will be
used to form the set of required signers on the final validated
transaction. These must correctly align with the expectations of the
Contract.verify
method, which should be written to defensively check
this. In particular, it is expected that at minimum the owner of an
asset would have to be signing to permission transfer of that asset. In
addition, other signatories will often be required e.g. an Oracle
identity for an Oracle command, or both parties when there is an
exchange of assets.
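Putting those requirements together, attaching a command might look like the following sketch; the Approve command and its fxRate field are hypothetical illustrations of a command that captures data alongside its required signers:
val approveCommand = Command(
        TradeApprovalContract.Commands.Approve(fxRate = BigDecimal("1.0912")), // data the verify method can use
        listOf(ourIdentity.owningKey, counterparty.owningKey)                  // required signers
)
builder.addCommand(approveCommand)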
Generating Outputs¶
Having located a set of StateAndRefs as the transaction inputs, the flow has to generate the output states. Typically, this is a simple call to the Kotlin copy method to modify the few fields that will transition in the transaction. The contract code may provide a generateXXX method to help with this process if the task is more complicated. With a workflow state a slightly modified copy of the state is usually sufficient, especially as it is expected that we wish to preserve the linearId between state revisions, so that Vault queries can find the latest revision.
For fungible contract states such as cash
it is common to distribute
and split the total amount e.g. to produce a remaining balance output
state for the original owner when breaking up a large amount input
state. Remember that the result of a successful transaction is always to
fully consume/spend the input states, so this is required to conserve
the total cash. For example from the demo code:
// Gather our inputs. We would normally use VaultService.generateSpend
// to carry out the build in a single step. To be more explicit
// we will use query manually in the helper function below.
// Putting this into a non-suspendable function also prevents issues when
// the flow is suspended.
val (inputs, residual) = gatherOurInputs(serviceHub, lockId, sellAmount, request.notary)
// Build an output state for the counterparty
val transferredFundsOutput = Cash.State(sellAmount, request.counterparty)
val outputs = if (residual > 0L) {
// Build an output state for the residual change back to us
val residualAmount = Amount(residual, sellAmount.token)
val residualOutput = Cash.State(residualAmount, serviceHub.myInfo.singleIdentity())
listOf(transferredFundsOutput, residualOutput)
} else {
listOf(transferredFundsOutput)
}
return Pair(inputs, outputs)
Building the SignedTransaction¶
Having gathered all the components for the transaction we now need to use a TransactionBuilder
to construct the
full SignedTransaction
. We instantiate a TransactionBuilder
and provide a notary that will be associated with
the output states. Then we keep adding inputs, outputs, commands and attachments to complete the transaction.
Once the transaction is fully formed, we call ServiceHub.signInitialTransaction
to sign the TransactionBuilder
and convert it into a SignedTransaction
.
Examples of this process are:
// Modify the state field for new output. We use copy, to ensure no other modifications.
// It is especially important for a LinearState that the linearId is copied across,
// not accidentally assigned a new random id.
val newState = latestRecord.state.data.copy(state = verdict)
// We have to use the original notary for the new transaction
val notary = latestRecord.state.notary
// Get and populate the new TransactionBuilder
// To destroy the old proposal state and replace with the new completion state.
// Also add the Completed command with keys of all parties to signal the Tx purpose
// to the Contract verify method.
val tx = TransactionBuilder(notary).
withItems(
latestRecord,
StateAndContract(newState, TRADE_APPROVAL_PROGRAM_ID),
Command(TradeApprovalContract.Commands.Completed(),
listOf(ourIdentity.owningKey, latestRecord.state.data.source.owningKey)))
tx.setTimeWindow(serviceHub.clock.instant(), 60.seconds)
// We can sign this transaction immediately as we have already checked all the fields and the decision
// is ultimately a manual one from the caller.
// As a SignedTransaction we can pass the data around certain that it cannot be modified,
// although we do require further signatures to complete the process.
val selfSignedTx = serviceHub.signInitialTransaction(tx)
private fun buildTradeProposal(ourInputStates: List<StateAndRef<Cash.State>>,
ourOutputState: List<Cash.State>,
theirInputStates: List<StateAndRef<Cash.State>>,
theirOutputState: List<Cash.State>): SignedTransaction {
// This is the correct way to create a TransactionBuilder,
// do not construct directly.
// We also set the notary to match the input notary
val builder = TransactionBuilder(ourInputStates.first().state.notary)
// Add the move commands and key to indicate all the respective owners and need to sign
val ourSigners = ourInputStates.map { it.state.data.owner.owningKey }.toSet()
val theirSigners = theirInputStates.map { it.state.data.owner.owningKey }.toSet()
builder.addCommand(Cash.Commands.Move(), (ourSigners + theirSigners).toList())
// Build and add the inputs and outputs
builder.withItems(*ourInputStates.toTypedArray())
builder.withItems(*theirInputStates.toTypedArray())
builder.withItems(*ourOutputState.map { StateAndContract(it, Cash.PROGRAM_ID) }.toTypedArray())
builder.withItems(*theirOutputState.map { StateAndContract(it, Cash.PROGRAM_ID) }.toTypedArray())
// We have already validated their response and trust our own data
// so we can sign. Note the returned SignedTransaction is still not fully signed
// and would not pass full verification yet.
return serviceHub.signInitialTransaction(builder, ourSigners.single())
}
Completing the SignedTransaction¶
Having created an initial TransactionBuilder
and converted this to a SignedTransaction
, the process of
verifying and forming a full SignedTransaction
begins and then completes with the
notarisation. In practice this is a relatively stereotypical process,
because assuming the SignedTransaction
is correctly constructed the
verification should be immediate. However, it is also important to
recheck the business details of any data received back from an external
node, because a malicious party could always modify the contents before
returning the transaction. Each remote flow should therefore check as
much as possible of the initial SignedTransaction
inside the unwrap
of
the receive before agreeing to sign. Any issues should immediately throw
an exception to abort the flow. Similarly, the originator should always
apply any new signatures to its original proposal to ensure the contents
of the transaction have not been altered by the remote parties.
The typical code therefore checks the received SignedTransaction
using the verifySignaturesExcept
method, excluding itself, the
notary and any other parties yet to apply their signature. The contents of the SignedTransaction
should be fully
verified further by expanding with toLedgerTransaction
and calling
verify
. Further context specific and business checks should then be
made, because the Contract.verify
is not allowed to access external
context. For example, the flow may need to check that the parties are the
right ones, or that the Command
present on the transaction is as
expected for this specific flow. An example of this from the demo code is:
// First we receive the verdict transaction signed by their single key
val completeTx = sourceSession.receive<SignedTransaction>().unwrap {
// Check the transaction is signed apart from our own key and the notary
it.verifySignaturesExcept(ourIdentity.owningKey, it.tx.notary!!.owningKey)
// Check the transaction data is correctly formed
val ltx = it.toLedgerTransaction(serviceHub, false)
ltx.verify()
// Confirm that this is the expected type of transaction
require(ltx.commands.single().value is TradeApprovalContract.Commands.Completed) {
"Transaction must represent a workflow completion"
}
// Check the context dependent parts of the transaction as the
// Contract verify method must not use serviceHub queries.
val state = ltx.outRef<TradeApprovalContract.State>(0)
require(serviceHub.myInfo.isLegalIdentity(state.state.data.source)) {
"Proposal not one of our original proposals"
}
require(state.state.data.counterparty == sourceSession.counterparty) {
"Proposal not for sent from correct source"
}
it
}
After verification the remote flow will return its signature to the
originator. The originator should apply that signature to the starting
SignedTransaction
and recheck the signatures match.
Committing the Transaction¶
Once all the signatures are applied to the SignedTransaction
, the
final steps are notarisation and ensuring that all nodes record the fully-signed transaction. The
code for this is standardised in the FinalityFlow
:
// Notarise and distribute the completed transaction.
subFlow(FinalityFlow(allPartySignedTx, sourceSession))
Partially Visible Transactions¶
The discussion so far has assumed that the parties need full visibility
of the transaction to sign. However, there may be situations where each
party needs to store private data for audit purposes, or for evidence to
a regulator, but does not wish to share that with the other trading
partner. The tear-off/Merkle tree support in Corda allows flows to send
portions of the full transaction to restrict visibility to remote
parties. To do this one can use the
SignedTransaction.buildFilteredTransaction
extension method to produce
a FilteredTransaction
. The elements of the SignedTransaction
which we wish to hide will be replaced with their secure hash. The
overall transaction id is still provable from the
FilteredTransaction
preventing change of the private data, but we do
not expose that data to the other node directly. A full example of this
can be found in the NodeInterestRates
Oracle code from the
irs-demo
project which interacts with the RatesFixFlow
flow.
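As a rough sketch of the pattern (assuming stx is the fully signed SignedTransaction and that the remote party only needs to see the commands):
import java.util.function.Predicate
import net.corda.core.contracts.Command
import net.corda.core.transactions.FilteredTransaction

// Keep only the commands visible; every other component is replaced by its
// secure hash, yet the transaction id is still provable from the Merkle proof.
val ftx: FilteredTransaction = stx.buildFilteredTransaction(Predicate { elem ->
    elem is Command<*>
})
otherSideSession.send(ftx)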
Also, refer to the Transaction tear-offs.
Writing flows¶
This article explains our approach to modelling business processes and the lower level network protocols that implement them. It explains how the platform’s flow framework is used, and takes you through the code for a simple 2-party asset trading flow which is included in the source.
Introduction¶
Shared distributed ledgers are interesting because they allow many different, mutually distrusting parties to share a single source of truth about the ownership of assets. Digitally signed transactions are used to update that shared ledger, and transactions may alter many states simultaneously and atomically.
Blockchain systems such as Bitcoin support the idea of building up a finished, signed transaction by passing around partially signed invalid transactions outside of the main network, and by doing this you can implement delivery versus payment such that there is no chance of settlement failure, because the movement of cash and the traded asset are performed atomically by the same transaction. To perform such a trade involves a multi-step flow in which messages are passed back and forth privately between parties, checked, signed and so on.
There are many benefits of this flow based design and some development complexities as well. Some of the development challenges include:
- Avoiding “callback hell” in which code that should ideally be sequential is turned into an unreadable mess due to the desire to avoid using up a thread for every flow instantiation.
- Surviving node shutdowns/restarts that may occur in the middle of the flow without complicating things. This implies that the state of the flow must be persisted to disk.
- Error handling.
- Message routing.
- Serialisation.
- Catching type errors, in which the developer gets temporarily confused and expects to receive/send one type of message when actually they need to receive/send another.
- Unit testing of the finished flow.
Actor frameworks can solve some of the above but they are often tightly bound to a particular messaging layer, and we would like to keep a clean separation. Additionally, they are typically not type safe, and don’t make persistence or writing sequential code much easier.
To put these problems in perspective, the payment channel protocol in the bitcoinj library, which allows bitcoins to be temporarily moved off-chain and traded at high speed between two parties in private, consists of about 7000 lines of Java and took over a month of full time work to develop. Most of that code is concerned with the details of persistence, message passing, lifecycle management, error handling and callback management. Because the business logic is quite spread out the code can be difficult to read and debug.
As small contract-specific trading flows are a common occurrence in finance, we provide a framework for the construction of them that automatically handles many of the concerns outlined above.
Theory¶
A continuation is a suspended stack frame stored in a regular object that can be passed around, serialised, unserialised and resumed from where it was suspended. This concept is sometimes referred to as “fibers”. This may sound abstract but don’t worry, the examples below will make it clearer. The JVM does not natively support continuations, so we implement them using a library called Quasar which works through behind-the-scenes bytecode rewriting. You don’t have to know how this works to benefit from it, however.
We use continuations for the following reasons:
- It allows us to write code that is free of callbacks, that looks like ordinary sequential code.
- A suspended continuation takes far less memory than a suspended thread. It can be as low as a few hundred bytes. In contrast a suspended Java thread stack can easily be 1mb in size.
- It frees the developer from thinking (much) about persistence and serialisation.
A state machine is a piece of code that moves through various states. These are not the same as states in the data model (that represent facts about the world on the ledger), but rather indicate different stages in the progression of a multi-stage flow. Typically writing a state machine would require the use of a big switch statement and some explicit variables to keep track of where you’re up to. The use of continuations avoids this hassle.
A two party trading flow¶
We would like to implement the “hello world” of shared transaction building flows: a seller wishes to sell some asset (e.g. some commercial paper) in return for cash. The buyer wishes to purchase the asset using his cash. They want the trade to be atomic so neither side is exposed to the risk of settlement failure. We assume that the buyer and seller have found each other and arranged the details on some exchange, or over the counter. The details of how the trade is arranged aren’t covered in this article.
Our flow has two parties (B and S for buyer and seller) and will proceed as follows:
- S sends a StateAndRef pointing to the state they want to sell to B, along with info about the price they require B to pay.
- B sends to S a SignedTransaction that includes two inputs (the state owned by S, and cash owned by B) and three outputs (the state now owned by B, the cash now owned by S, and any change cash still owned by B). The SignedTransaction has a single signature from B but isn’t valid because it lacks a signature from S authorising movement of the asset.
- S signs the transaction and sends it back to B.
- B finalises the transaction by sending it to the notary who checks the transaction for validity, recording the transaction in B’s local vault, and then sending it on to S who also checks it and commits the transaction to S’s local vault.
You can find the implementation of this flow in the file finance/workflows/src/main/kotlin/net/corda/finance/TwoPartyTradeFlow.kt.
Assuming no malicious termination, they both end the flow being in possession of a valid, signed transaction that represents an atomic asset swap.
Note that it’s the seller who initiates contact with the buyer, not vice-versa as you might imagine.
We start by defining two classes that will contain the flow definition. We also pick what data will be used by each side.
Note
The code samples in this tutorial are only available in Kotlin, but you can use any JVM language to write them and the approach is the same.
object TwoPartyTradeFlow {
class UnacceptablePriceException(givenPrice: Amount<Currency>) : FlowException("Unacceptable price: $givenPrice")
class AssetMismatchException(val expectedTypeName: String, val typeName: String) : FlowException() {
override fun toString() = "The submitted asset didn't match the expected type: $expectedTypeName vs $typeName"
}
/**
* This object is serialised to the network and is the first flow message the seller sends to the buyer.
*
* @param payToIdentity anonymous identity of the seller, for payment to be sent to.
*/
@CordaSerializable
data class SellerTradeInfo(
val price: Amount<Currency>,
val payToIdentity: PartyAndCertificate
)
open class Seller(private val otherSideSession: FlowSession,
private val assetToSell: StateAndRef<OwnableState>,
private val price: Amount<Currency>,
private val myParty: PartyAndCertificate,
override val progressTracker: ProgressTracker = TwoPartyTradeFlow.Seller.tracker()) : FlowLogic<SignedTransaction>() {
companion object {
fun tracker() = ProgressTracker()
}
@Suspendable
override fun call(): SignedTransaction {
TODO()
}
}
open class Buyer(private val sellerSession: FlowSession,
private val notary: Party,
private val acceptablePrice: Amount<Currency>,
private val typeToBuy: Class<out OwnableState>,
private val anonymous: Boolean) : FlowLogic<SignedTransaction>() {
@Suspendable
override fun call(): SignedTransaction {
TODO()
}
}
}
This code defines several classes nested inside the main TwoPartyTradeFlow
singleton. Some of the classes are
simply flow messages or exceptions. The other two represent the buyer and seller side of the flow.
Going through the data needed to become a seller, we have:
- otherSideSession: FlowSession - a flow session for communication with the buyer
- assetToSell: StateAndRef<OwnableState> - a pointer to the ledger entry that represents the thing being sold
- price: Amount<Currency> - the agreed-on price that the asset is being sold for (without an issuer constraint)
- myParty: PartyAndCertificate - the certificate representing the party that controls the asset being sold
And for the buyer:
- sellerSession: FlowSession - a flow session for communication with the seller
- notary: Party - the entry in the network map for the chosen notary. See “Notaries” for more information on notaries
- acceptablePrice: Amount<Currency> - the price that was agreed upon out of band. If the seller specifies a price less than or equal to this, then the trade will go ahead
- typeToBuy: Class<out OwnableState> - the type of state that is being purchased. This is used to check that the sell side of the flow isn’t trying to sell us the wrong thing, whether by accident or on purpose
- anonymous: Boolean - whether to generate a fresh, anonymous public key for the transaction
Alright, so using this flow shouldn’t be too hard: in the simplest case we can just create a Buyer or Seller
with the details of the trade, depending on who we are. We then have to start the flow in some way. Just
calling the call
function ourselves won’t work: instead we need to ask the framework to start the flow for
us. More on that in a moment.
Suspendable functions¶
The call
function of the buyer/seller classes is marked with the @Suspendable
annotation. What does this mean?
As mentioned above, our flow framework will at points suspend the code and serialise it to disk. For this to work,
any methods on the call stack must have been pre-marked as @Suspendable
so the bytecode rewriter knows to modify
the underlying code to support this new feature. A flow is suspended when calling either receive
, send
or
sendAndReceive
which we will learn more about below. For now, just be aware that when one of these methods is
invoked, all methods on the stack must have been marked. If you forget, then in the unit test environment you will
get a useful error message telling you which methods you didn’t mark. The fix is simple enough: just add the annotation
and try again.
Note
Java 9 is likely to remove this pre-marking requirement completely.
Whitelisted classes with the Corda node¶
For security reasons, we do not want Corda nodes to be able to just receive instances of any class on the classpath
via messaging, since this has been exploited in other Java application containers in the past. Instead, we require
every class contained in messages to be whitelisted. Some classes are whitelisted by default (see DefaultWhitelist
),
but others outside of that set need to be whitelisted either by using the annotation @CordaSerializable
or via the
plugin framework. See Object serialization. You can see above that the SellerTradeInfo
has been annotated.
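For example, a minimal sketch of whitelisting a custom message type (the class here is hypothetical):
import net.corda.core.serialization.CordaSerializable
import java.time.Instant

// Without this annotation (or a whitelist entry registered via the plugin
// framework), nodes would refuse to deserialise this class from a flow message.
@CordaSerializable
data class TradeQuote(val symbol: String, val pricePennies: Long, val validUntil: Instant)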
Starting your flow¶
The StateMachineManager
is the class responsible for taking care of all running flows in a node. It knows
how to register handlers with the messaging system (see “Networking and messaging”) and iterate the right state machine
when messages arrive. It provides the send/receive/sendAndReceive calls that let the code request network
interaction and it will save/restore serialised versions of the fiber at the right times.
Flows can be invoked in several ways. For instance, they can be triggered by scheduled events (in which case they need to
be annotated with @SchedulableFlow
), see “Event scheduling” to learn more about this. They can also be triggered
directly via the node’s RPC API from your app code (in which case they need to be annotated with @StartableByRPC). It’s
possible for a flow to be of both types.
You request a flow to be invoked by using the CordaRPCOps.startFlowDynamic
method. This takes a
Java reflection Class
object that describes the flow class to use (in this case, either Buyer
or Seller
).
It also takes a set of arguments to pass to the constructor. Because it’s possible for flow invocations to
be requested by untrusted code (e.g. a state that you have been sent), the types that can be passed into the
flow are checked against a whitelist, which can be extended by apps themselves at load time. There are also a series
of inlined Kotlin extension functions of the form CordaRPCOps.startFlow
which help with invoking flows in a type
safe manner.
The process of starting a flow returns a FlowHandle
that you can use to observe the result, and which also contains
a permanent identifier for the invoked flow in the form of the StateMachineRunId
. Should you also wish to track the
progress of your flow (see Progress tracking) then you can invoke your flow instead using
CordaRPCOps.startTrackedFlowDynamic
or any of its corresponding CordaRPCOps.startTrackedFlow
extension functions.
These will return a FlowProgressHandle
, which is just like a FlowHandle
except that it also contains an observable
progress
field.
Note
The developer must then either subscribe to this progress
observable or invoke the notUsed()
extension
function for it. Otherwise the unused observable will waste resources back in the node.
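A minimal sketch of starting and tracking a flow over RPC (MyTradeFlow, counterparty and amount are illustrative; proxy is assumed to be a CordaRPCOps obtained from CordaRPCClient):
import net.corda.core.messaging.startTrackedFlow
import net.corda.core.utilities.getOrThrow

// Start the flow, print each progress step as it happens, then block on the result.
val handle = proxy.startTrackedFlow(::MyTradeFlow, counterparty, amount)
handle.progress.subscribe { step -> println("Progress: $step") }
val result = handle.returnValue.getOrThrow()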
Implementing the seller¶
Let’s implement the Seller.call
method that will be run when the flow is invoked.
@Suspendable
override fun call(): SignedTransaction {
progressTracker.currentStep = AWAITING_PROPOSAL
// Make the first message we'll send to kick off the flow.
val hello = SellerTradeInfo(price, myParty)
// What we get back from the other side is a transaction that *might* be valid and acceptable to us,
// but we must check it out thoroughly before we sign!
// SendTransactionFlow allows seller to access our data to resolve the transaction.
subFlow(SendStateAndRefFlow(otherSideSession, listOf(assetToSell)))
otherSideSession.send(hello)
// Verify and sign the transaction.
progressTracker.currentStep = VERIFYING_AND_SIGNING
// DOCSTART 07
// Sync identities to ensure we know all of the identities involved in the transaction we're about to
// be asked to sign
subFlow(IdentitySyncFlow.Receive(otherSideSession))
// DOCEND 07
// DOCSTART 5
val signTransactionFlow = object : SignTransactionFlow(otherSideSession, VERIFYING_AND_SIGNING.childProgressTracker()) {
override fun checkTransaction(stx: SignedTransaction) {
// Verify that we know who all the participants in the transaction are
val states: Iterable<ContractState> = serviceHub.loadStates(stx.tx.inputs.toSet()).map { it.state.data } + stx.tx.outputs.map { it.data }
states.forEach { state ->
state.participants.forEach { anon ->
require(serviceHub.identityService.wellKnownPartyFromAnonymous(anon) != null) {
"Transaction state $state involves unknown participant $anon"
}
}
}
if (stx.tx.outputStates.sumCashBy(myParty.party).withoutIssuer() != price)
throw FlowException("Transaction is not sending us the right amount of cash")
}
}
val txId = subFlow(signTransactionFlow).id
// DOCEND 5
return subFlow(ReceiveFinalityFlow(otherSideSession, expectedTxId = txId))
}
We start by sending information about the asset we wish to sell to the buyer. We fill out the initial flow message with
the trade info, and then call otherSideSession.send with the payload we want delivered. The session already identifies
the counterparty, so only the payload needs to be supplied: otherSideSession.send will serialise the payload and send
it to the other party automatically.
Next, we call a subflow called IdentitySyncFlow.Receive
(see Sub-flows). IdentitySyncFlow.Receive
ensures that our node can de-anonymise any confidential identities in the transaction it’s about to be asked to sign.
Next, we call another subflow called SignTransactionFlow
. SignTransactionFlow
automates the process of:
- Receiving a proposed trade transaction from the buyer, with the buyer’s signature attached.
- Checking that the proposed transaction is valid.
- Calculating and attaching our own signature so that the transaction is now signed by both the buyer and the seller.
- Sending the transaction back to the buyer.
The transaction then needs to be finalised. This is the process of sending the transaction to a notary to assert (with another signature) that the time-window in the transaction (if any) is valid and there are no double spends. In this flow, finalisation is handled by the buyer; we just wait for them to send it to us. It will have the same ID as the one we started with but more signatures.
Implementing the buyer¶
OK, let’s do the same for the buyer side:
@Suspendable
override fun call(): SignedTransaction {
// Wait for a trade request to come in from the other party.
progressTracker.currentStep = RECEIVING
val (assetForSale, tradeRequest) = receiveAndValidateTradeRequest()
// Create the identity we'll be paying to, and send the counterparty proof we own the identity
val buyerAnonymousIdentity = if (anonymous)
serviceHub.keyManagementService.freshKeyAndCert(ourIdentityAndCert, false)
else
ourIdentityAndCert
// Put together a proposed transaction that performs the trade, and sign it.
progressTracker.currentStep = SIGNING
val (ptx, cashSigningPubKeys) = assembleSharedTX(assetForSale, tradeRequest, buyerAnonymousIdentity)
// DOCSTART 6
// Now sign the transaction with whatever keys we need to move the cash.
val partSignedTx = serviceHub.signInitialTransaction(ptx, cashSigningPubKeys)
// Sync up confidential identities in the transaction with our counterparty
subFlow(IdentitySyncFlow.Send(sellerSession, ptx.toWireTransaction(serviceHub)))
// Send the signed transaction to the seller, who must then sign it themselves and commit
// it to the ledger by sending it to the notary.
progressTracker.currentStep = COLLECTING_SIGNATURES
val sellerSignature = subFlow(CollectSignatureFlow(partSignedTx, sellerSession, sellerSession.counterparty.owningKey))
val twiceSignedTx = partSignedTx + sellerSignature
// DOCEND 6
// Notarise and record the transaction.
progressTracker.currentStep = RECORDING
return subFlow(FinalityFlow(twiceSignedTx, sellerSession))
}
@Suspendable
private fun receiveAndValidateTradeRequest(): Pair<StateAndRef<OwnableState>, SellerTradeInfo> {
val assetForSale = subFlow(ReceiveStateAndRefFlow<OwnableState>(sellerSession)).single()
return assetForSale to sellerSession.receive<SellerTradeInfo>().unwrap {
progressTracker.currentStep = VERIFYING
// What is the seller trying to sell us?
val asset = assetForSale.state.data
val assetTypeName = asset.javaClass.name
// The asset must either be owned by the well known identity of the counterparty, or we must be able to
// prove the owner is a confidential identity of the counterparty.
val assetForSaleIdentity = serviceHub.identityService.wellKnownPartyFromAnonymous(asset.owner)
require(assetForSaleIdentity == sellerSession.counterparty) { "Well known identity lookup returned identity that does not match counterparty" }
// Register the identity we're about to send payment to. This shouldn't be the same as the asset owner
// identity, so that anonymity is enforced.
val wellKnownPayToIdentity = serviceHub.identityService.verifyAndRegisterIdentity(it.payToIdentity) ?: it.payToIdentity
require(wellKnownPayToIdentity.party == sellerSession.counterparty) { "Well known identity to pay to must match counterparty identity" }
if (it.price > acceptablePrice)
throw UnacceptablePriceException(it.price)
if (!typeToBuy.isInstance(asset))
throw AssetMismatchException(typeToBuy.name, assetTypeName)
it
}
}
@Suspendable
private fun assembleSharedTX(assetForSale: StateAndRef<OwnableState>, tradeRequest: SellerTradeInfo, buyerAnonymousIdentity: PartyAndCertificate): SharedTx {
val ptx = TransactionBuilder(notary)
// Add input and output states for the movement of cash, by using the Cash contract to generate the states
val (tx, cashSigningPubKeys) = CashUtils.generateSpend(serviceHub, ptx, tradeRequest.price, ourIdentityAndCert, tradeRequest.payToIdentity.party)
// Add inputs/outputs/a command for the movement of the asset.
tx.addInputState(assetForSale)
val (command, state) = assetForSale.state.data.withNewOwner(buyerAnonymousIdentity.party)
tx.addOutputState(state, assetForSale.state.contract, assetForSale.state.notary)
tx.addCommand(command, assetForSale.state.data.owner.owningKey)
// We set the transaction's time-window: it may be that none of the contracts need this!
// But it can't hurt to have one.
val currentTime = serviceHub.clock.instant()
tx.setTimeWindow(currentTime, 30.seconds)
return SharedTx(tx, cashSigningPubKeys)
}
This code is longer but no more complicated. Here are some things to pay attention to:
- We do some sanity checking on the proposed trade transaction received from the seller to ensure we’re being offered what we expected to be offered.
- We create a cash spend using CashUtils.generateSpend. You can read the vault documentation to learn more about this.
- We access the service hub as needed to access things that are transient and may change or be recreated whilst a flow is suspended, such as the wallet or the network map.
- We call CollectSignaturesFlow as a subflow to send the unfinished, still-invalid transaction to the seller so they can sign it and send it back to us.
- Last, we call FinalityFlow as a subflow to finalise the transaction.
As you can see, the flow logic is straightforward and does not contain any callbacks or network glue code, despite the fact that it takes minimal resources and can survive node restarts.
Flow sessions¶
It will be useful to describe how flows communicate with each other. A node may have many flows running at the same time, and perhaps communicating with the same counterparty node but for different purposes. Therefore flows need a way to segregate communication channels so that concurrent conversations between flows on the same set of nodes do not interfere with each other.
To achieve this, in order to communicate with a counterparty a flow must first initiate a session with that Party
using initiateFlow, which returns a FlowSession object identifying the communication. Subsequently the first
actual communication will kick off a counter-flow on the other side, receiving a “reply” session object. A session ends
when either flow ends, whether as expected or prematurely. If a flow ends prematurely then the other side will be
notified of that and they will also end, as the whole point of flows is a known sequence of message transfers. Flows end
prematurely due to exceptions, and as described above, if that exception is FlowException or a sub-type then it
will propagate to the other side. Any other exception will not propagate.
Taking a step back, we mentioned that the other side has to accept the session request for there to be a communication
channel. A node accepts a session request if it has registered the flow type (the fully-qualified class name) that is
making the request - each session initiation includes the initiating flow type. The initiated (server) flow must name the
initiating (client) flow using the @InitiatedBy
annotation and passing the class name that will be starting the
flow session as the annotation parameter.
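Putting these pieces together, a minimal sketch of an initiating/initiated flow pair might look like this (the flow names and message contents are illustrative):
import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.*
import net.corda.core.identity.Party
import net.corda.core.utilities.unwrap

@InitiatingFlow
@StartableByRPC
class PingFlow(private val counterparty: Party) : FlowLogic<String>() {
    @Suspendable
    override fun call(): String {
        // Opening the session identifies this conversation; the counter-flow is
        // started on the other node by the first actual send.
        val session = initiateFlow(counterparty)
        return session.sendAndReceive<String>("ping").unwrap { it }
    }
}

@InitiatedBy(PingFlow::class)
class PongFlow(private val otherSideSession: FlowSession) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        val msg = otherSideSession.receive<String>().unwrap { it }
        otherSideSession.send("pong: $msg")
    }
}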
Sub-flows¶
Flows can be composed via nesting. Invoking a sub-flow looks similar to an ordinary function call:
@Suspendable
fun call() {
val unnotarisedTransaction = ...
subFlow(FinalityFlow(unnotarisedTransaction))
}
@Suspendable
public void call() throws FlowException {
SignedTransaction unnotarisedTransaction = ...
subFlow(new FinalityFlow(unnotarisedTransaction));
}
Let’s take a look at the three subflows we invoke in this flow.
FinalityFlow¶
On the buyer side, we use FinalityFlow
to finalise the transaction. It will:
- Send the transaction to the chosen notary and, if necessary, satisfy the notary that the transaction is valid.
- Record the transaction in the local vault, if it is relevant (i.e. involves the owner of the node).
- Send the fully signed transaction to the other participants for recording as well.
On the seller side we use ReceiveFinalityFlow
to receive and record the finalised transaction.
Warning
If the buyer stops before sending the finalised transaction to the seller, the buyer is left with a valid transaction but the seller isn’t, so they don’t get the cash! This sort of thing is not always a risk (as the buyer may not gain anything from that sort of behaviour except a lawsuit), but if it is, a future version of the platform will allow you to ask the notary to send you the transaction as well, in case your counterparty does not. This is not the default because it reveals more private info to the notary.
We simply create the flow object via its constructor, and then pass it to the subFlow
method which
returns the result of the flow’s execution directly. Behind the scenes all this is doing is wiring up progress
tracking (discussed more below) and then running the object’s call
method. Because the sub-flow might suspend,
we must mark the method that invokes it as suspendable.
Within FinalityFlow, we use a further sub-flow called ReceiveTransactionFlow
. This is responsible for downloading
and checking all the dependencies of a transaction, which in Corda are always retrievable from the party that sent you a
transaction that uses them. This flow returns a list of LedgerTransaction
objects.
Note
Transaction dependency resolution assumes that the peer you got the transaction from has all of the dependencies itself. It must do, otherwise it could not have convinced itself that the dependencies were themselves valid. It’s important to realise that requesting only the transactions we require is a privacy leak, because if we don’t download a transaction from the peer, they know we must have already seen it before. Fixing this privacy leak will come later.
CollectSignaturesFlow/SignTransactionFlow¶
We also invoke two other subflows:
- CollectSignaturesFlow, on the buyer side
- SignTransactionFlow, on the seller side
These flows communicate to gather all the required signatures for the proposed transaction. CollectSignaturesFlow
will:
- Verify any signatures collected on the transaction so far
- Verify the transaction itself
- Send the transaction to the remaining required signers and receive back their signatures
- Verify the collected signatures
SignTransactionFlow
responds by:
- Receiving the partially-signed transaction off the wire
- Verifying the existing signatures
- Resolving the transaction’s dependencies
- Verifying the transaction itself
- Running any custom validation logic
- Sending their signature back to the buyer
- Waiting for the transaction to be recorded in their vault
We cannot instantiate SignTransactionFlow
itself, as it’s an abstract class. Instead, we need to subclass it and
override checkTransaction()
to add our own custom validation logic:
val signTransactionFlow = object : SignTransactionFlow(otherSideSession, VERIFYING_AND_SIGNING.childProgressTracker()) {
override fun checkTransaction(stx: SignedTransaction) {
// Verify that we know who all the participants in the transaction are
val states: Iterable<ContractState> = serviceHub.loadStates(stx.tx.inputs.toSet()).map { it.state.data } + stx.tx.outputs.map { it.data }
states.forEach { state ->
state.participants.forEach { anon ->
require(serviceHub.identityService.wellKnownPartyFromAnonymous(anon) != null) {
"Transaction state $state involves unknown participant $anon"
}
}
}
if (stx.tx.outputStates.sumCashBy(myParty.party).withoutIssuer() != price)
throw FlowException("Transaction is not sending us the right amount of cash")
}
}
val txId = subFlow(signTransactionFlow).id
In this case, our custom validation logic ensures that the amount of cash outputs in the transaction equals the price of the asset.
Persisting flows¶
If you look at the code for FinalityFlow
, CollectSignaturesFlow
and SignTransactionFlow
, you’ll see calls
to both receive
and sendAndReceive
. Once either of these methods is called, the call
method will be
suspended into a continuation and saved to persistent storage. If the node crashes or is restarted, the flow will
effectively continue as if nothing had happened. Your code may remain blocked inside such a call for seconds,
minutes, hours or even days in the case of a flow that needs human interaction!
Note
There are a couple of rules you need to bear in mind when writing a class that will be used as a continuation. The first is that anything on the stack when the function is suspended will be stored into the heap and kept alive by the garbage collector. So try to avoid keeping enormous data structures alive unless you really have to. You can always use private methods to keep the stack uncluttered with temporary variables, or to avoid objects that Kryo is not able to serialise correctly.
The second is that as well as being kept on the heap, objects reachable from the stack will be serialised. The state of the function call may be resurrected much later! Kryo doesn’t require objects be marked as serialisable, but even so, doing things like creating threads from inside these calls would be a bad idea. They should only contain business logic and only do I/O via the methods exposed by the flow framework.
It’s OK to keep references around to many large internal node services though: these will be serialised using a special token that’s recognised by the platform, and wired up to the right instance when the continuation is loaded off disk again.
Warning
If a node has flows still in a suspended state, with flow continuations written to disk, it will not be possible to upgrade that node to a new version of Corda or your app, because flows must be completely “drained” before an upgrade can be performed, and must reach a finished state for draining to complete (see Flow draining for details). While there are mechanisms for “evolving” serialised data held in the vault, there are no equivalent mechanisms for updating serialised checkpoint data. For this reason it is not a good idea to design flows with the intention that they should remain in a suspended state for a long period of time, as this will obstruct necessary upgrades to Corda itself. Any long-running business process should therefore be structured as a series of discrete transactions, written to the vault, rather than a single flow persisted over time through the flow checkpointing mechanism.
receive
and sendAndReceive
return a simple wrapper class, UntrustworthyData<T>
, which is
just a marker class that reminds us that the data came from a potentially malicious external source and may have been
tampered with or be unexpected in other ways. It doesn’t add any functionality, but acts as a reminder to “scrub”
the data before use.
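For example, a minimal sketch of the pattern, reusing the SellerTradeInfo message from earlier:
// unwrap forces validation to run before the payload escapes the wrapper.
val info: SellerTradeInfo = otherSideSession.receive<SellerTradeInfo>().unwrap {
    require(it.price.quantity > 0) { "Price must be positive" }
    it
}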
Exception handling¶
Flows can throw exceptions to prematurely terminate their execution. The flow framework gives special treatment to
FlowException
and its subtypes. These exceptions are treated as error responses of the flow and are propagated
to all counterparties it is communicating with. The receiving flows will throw the same exception the next time they do
a receive
or sendAndReceive
and thus end the flow session. If the receiver was invoked via subFlow
then the exception can be caught there enabling re-invocation of the sub-flow.
If the exception thrown by the erroring flow is not a FlowException
it will still terminate but will not propagate to
the other counterparties. Instead they will be informed the flow has terminated and will themselves be terminated with a
generic exception.
Note
A future version will extend this to give the node administrator more control on what to do with such erroring flows.
Throwing a FlowException
enables a flow to reject a piece of data it has received back to the sender. This is typically
done in the unwrap
method of the received UntrustworthyData
. In the above example the seller checks the price
and throws FlowException
if it’s invalid. It’s then up to the buyer to either try again with a better price or give up.
Progress tracking¶
Not shown in the code snippets above is the usage of the ProgressTracker
API. Progress tracking exports information
from a flow about where it’s got up to in such a way that observers can render it in a useful manner to humans who
may need to be informed. It may be rendered via an API, in a GUI, onto a terminal window, etc.
A ProgressTracker
is constructed with a series of Step
objects, where each step is an object representing a
stage in a piece of work. It is therefore typical to use singletons that subclass Step
, which may be defined easily
in one line when using Kotlin. Typical steps might be “Waiting for response from peer”, “Waiting for signature to be
approved”, “Downloading and verifying data” etc.
A flow might declare some steps with code inside the flow class like this:
object RECEIVING : ProgressTracker.Step("Waiting for seller trading info")
object VERIFYING : ProgressTracker.Step("Verifying seller assets")
object SIGNING : ProgressTracker.Step("Generating and signing transaction proposal")
object COLLECTING_SIGNATURES : ProgressTracker.Step("Collecting signatures from other parties") {
override fun childProgressTracker() = CollectSignaturesFlow.tracker()
}
object RECORDING : ProgressTracker.Step("Recording completed transaction") {
// TODO: Currently triggers a race condition on Team City. See https://github.com/corda/corda/issues/733.
// override fun childProgressTracker() = FinalityFlow.tracker()
}
override val progressTracker = ProgressTracker(RECEIVING, VERIFYING, SIGNING, COLLECTING_SIGNATURES, RECORDING)
private final ProgressTracker progressTracker = new ProgressTracker(
RECEIVING,
VERIFYING,
SIGNING,
COLLECTING_SIGNATURES,
RECORDING
);
private static final ProgressTracker.Step RECEIVING = new ProgressTracker.Step(
"Waiting for seller trading info");
private static final ProgressTracker.Step VERIFYING = new ProgressTracker.Step(
"Verifying seller assets");
private static final ProgressTracker.Step SIGNING = new ProgressTracker.Step(
"Generating and signing transaction proposal");
private static final ProgressTracker.Step COLLECTING_SIGNATURES = new ProgressTracker.Step(
"Collecting signatures from other parties");
private static final ProgressTracker.Step RECORDING = new ProgressTracker.Step(
"Recording completed transaction");
Each step exposes a label. By defining your own step types, you can export progress in a way that’s both human readable and machine readable.
Progress trackers are hierarchical. Each step can be the parent for another tracker. By setting
Step.childProgressTracker
, a tree of steps can be created. It’s allowed to alter the hierarchy at runtime, on the
fly, and the progress renderers will adapt to that properly. This can be helpful when you don’t fully know ahead of
time what steps will be required. If you do know what is required, configuring as much of the hierarchy ahead of time
is a good idea, as that will help the users see what is coming up. You can pre-configure steps by overriding the
Step
class like this:
object VERIFYING_AND_SIGNING : ProgressTracker.Step("Verifying and signing transaction proposal") {
override fun childProgressTracker() = SignTransactionFlow.tracker()
}
private static final ProgressTracker.Step VERIFYING_AND_SIGNING = new ProgressTracker.Step("Verifying and signing transaction proposal") {
@Nullable
@Override
public ProgressTracker childProgressTracker() {
return SignTransactionFlow.Companion.tracker();
}
};
Every tracker has not only the steps given to it at construction time, but also the singleton
ProgressTracker.UNSTARTED
step and the ProgressTracker.DONE
step. Once a tracker has become DONE
its
position may not be modified again (because e.g. the UI may have been removed/cleaned up), but until that point, the
position can be set to any arbitrary step, both forwards and backwards. Steps may be skipped, repeated, etc. Note that
rolling the current step backwards will delete any progress trackers that are children of the steps being reversed, on
the assumption that those subtasks will have to be repeated.
Trackers provide an Rx observable which streams changes to the hierarchy. The top level observable exposes all the events generated by its children as well. The changes are represented by objects indicating whether the change is one of position (i.e. progress), structure (i.e. new subtasks being added/removed) or some other aspect of rendering (i.e. a step has changed in some way and is requesting a re-render).
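A minimal sketch of consuming that stream (tracker is assumed to be a ProgressTracker you hold a reference to, e.g. from a flow under test):
import net.corda.core.utilities.ProgressTracker

tracker.changes.subscribe { change ->
    when (change) {
        // The tracker moved to a different step.
        is ProgressTracker.Change.Position -> println("Now at: ${change.newStep.label}")
        // Subtasks were added or removed somewhere in the hierarchy.
        is ProgressTracker.Change.Structural -> println("Structure changed under: ${change.parent.label}")
        // A step asked to be re-rendered.
        is ProgressTracker.Change.Rendering -> println("Re-render: ${change.ofStep.label}")
    }
}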
The flow framework is somewhat integrated with this API. Each FlowLogic
may optionally provide a tracker by
overriding the progressTracker
property (getProgressTracker
method in Java). If the
FlowLogic.subFlow
method is used, then the tracker of the sub-flow will be made a child of the current
step in the parent flow automatically, if the parent is using tracking in the first place. The framework will also
automatically set the current step to DONE
for you, when the flow is finished.
Because a flow may sometimes wish to configure the children in its progress hierarchy before the sub-flow is constructed, for sub-flows that always follow the same outline regardless of their parameters it’s conventional to define a companion object/static method (for Kotlin/Java respectively) that constructs a tracker, and then allow the sub-flow to have the tracker it will use be passed in as a parameter. This allows all trackers to be built and linked ahead of time.
In future, the progress tracking framework will become a vital part of how exceptions, errors, and other faults are surfaced to human operators for investigation and resolution.
Future features¶
The flow framework is a key part of the platform and will be extended in major ways in future. Here are some of the features we have planned:
- Exception management, with a “flow hospital” tool to manually provide solutions to unavoidable problems (e.g. the other side doesn’t know the trade)
- Being able to interact with people, either via some sort of external ticketing system, or email, or a custom UI. For example to implement human transaction authorisations
- A standard library of flows that can be easily sub-classed by local developers in order to integrate internal reporting logic, or anything else that might be required as part of a communications lifecycle
Writing flow tests¶
A flow can be a fairly complex thing that interacts with many services and other parties over the network. That means unit testing one requires some infrastructure to provide lightweight mock implementations. The MockNetwork provides this testing infrastructure layer; you can find this class in the test-utils module.
A good example to examine for learning how to unit test flows is the ResolveTransactionsFlow
tests. This
flow takes care of downloading and verifying transaction graphs, with all the needed dependencies. We start
with this basic skeleton:
class ResolveTransactionsFlowTest {
private lateinit var mockNet: MockNetwork
private lateinit var notaryNode: StartedMockNode
private lateinit var megaCorpNode: StartedMockNode
private lateinit var miniCorpNode: StartedMockNode
private lateinit var megaCorp: Party
private lateinit var miniCorp: Party
private lateinit var notary: Party
@Before
fun setup() {
mockNet = MockNetwork(MockNetworkParameters(cordappsForAllNodes = listOf(DUMMY_CONTRACTS_CORDAPP, enclosedCordapp())))
notaryNode = mockNet.defaultNotaryNode
megaCorpNode = mockNet.createPartyNode(CordaX500Name("MegaCorp", "London", "GB"))
miniCorpNode = mockNet.createPartyNode(CordaX500Name("MiniCorp", "London", "GB"))
notary = mockNet.defaultNotaryIdentity
megaCorp = megaCorpNode.info.singleIdentity()
miniCorp = miniCorpNode.info.singleIdentity()
}
@After
fun tearDown() {
mockNet.stopNodes()
}
We create a mock network in our @Before
setup method and create a couple of nodes. We also record the identity
of the notary in our test network, which will come in handy later. We also tidy up when we’re done.
Next, we write a test case:
@Test
fun `resolve from two hashes`() {
val (stx1, stx2) = makeTransactions()
val p = TestFlow(setOf(stx2.id), megaCorp)
val future = miniCorpNode.startFlow(p)
mockNet.runNetwork()
future.getOrThrow()
miniCorpNode.transaction {
assertEquals(stx1, miniCorpNode.services.validatedTransactions.getTransaction(stx1.id))
assertEquals(stx2, miniCorpNode.services.validatedTransactions.getTransaction(stx2.id))
}
}
We’ll take a look at the makeTransactions
function in a moment. For now, it’s enough to know that it returns two
SignedTransaction
objects, the second of which spends the first. Both transactions are known by MegaCorpNode but
not MiniCorpNode.
The test logic is simple enough: we create the flow, giving it MegaCorpNode’s identity as the target to talk to.
Then we start it on MiniCorpNode and use the mockNet.runNetwork()
method to bounce messages around until things have
settled (i.e. there are no more messages waiting to be delivered). All this is done using an in-memory message
routing implementation that is fast to initialise and use. Finally, we obtain the result of the flow and do
some tests on it. We also check the contents of MiniCorpNode’s database to see that the flow had the intended effect
on the node’s persistent state.
Here’s what makeTransactions
looks like:
private fun makeTransactions(signFirstTX: Boolean = true, withAttachment: SecureHash? = null): Pair<SignedTransaction, SignedTransaction> {
// Make a chain of custody of dummy states and insert into node A.
val dummy1: SignedTransaction = DummyContract.generateInitial(0, notary, megaCorp.ref(1)).let {
if (withAttachment != null)
it.addAttachment(withAttachment)
when (signFirstTX) {
true -> {
val ptx = megaCorpNode.services.signInitialTransaction(it)
notaryNode.services.addSignature(ptx, notary.owningKey)
}
false -> {
notaryNode.services.signInitialTransaction(it, notary.owningKey)
}
}
}
megaCorpNode.transaction {
megaCorpNode.services.recordTransactions(dummy1)
}
val dummy2: SignedTransaction = DummyContract.move(dummy1.tx.outRef(0), miniCorp).let {
val ptx = megaCorpNode.services.signInitialTransaction(it)
notaryNode.services.addSignature(ptx, notary.owningKey)
}
megaCorpNode.transaction {
megaCorpNode.services.recordTransactions(dummy2)
}
return Pair(dummy1, dummy2)
}
We’re using the DummyContract
, a simple test smart contract which stores a single number in its states, along
with ownership and issuer information. You can issue such states, exit them and re-assign ownership (move them).
It doesn’t do anything else. This code simply creates a transaction that issues a dummy state (the issuer is
MEGA_CORP
, a pre-defined unit test identity), signs it with the test notary and MegaCorp keys and then
converts the builder to the final SignedTransaction. It then does so again, but this time, instead of issuing,
it re-assigns ownership. The chain of two transactions is finally committed to MegaCorpNode by sending them
directly to the megaCorpNode.services.recordTransactions method (note that this method doesn’t check the
transactions are valid) inside a database.transaction. All node flows run within a database transaction in the
nodes themselves, but any time we need to use the database directly from a unit test, we need to provide a database
transaction as shown here.
Writing oracle services¶
This article covers oracles: network services that link the ledger to the outside world by providing facts that affect the validity of transactions.
The current prototype includes an example oracle that provides an interest rate fixing service. It is used by the IRS trading demo app.
Introduction to oracles¶
Oracles are a key concept in the blockchain/decentralised ledger space. They can be essential for many kinds of application, because we often wish to condition the validity of a transaction on some fact being true or false, but the ledger itself has a design that is essentially functional: all transactions are pure and immutable. Phrased another way, a contract cannot perform any input/output or depend on any state outside of the transaction itself. For example, there is no way to download a web page or interact with the user from within a contract. It must be this way because everyone must be able to independently check a transaction and arrive at an identical conclusion regarding its validity for the ledger to maintain its integrity: if a transaction could evaluate to “valid” on one computer and then “invalid” a few minutes later on a different computer, the entire shared ledger concept wouldn’t work.
But transaction validity does often depend on data from the outside world - verifying that an interest rate swap is paying out correctly may require data on interest rates, verifying that a loan has reached maturity requires knowledge about the current time, knowing which side of a bet receives the payment may require arbitrary facts about the real world (e.g. the bankruptcy or solvency of a company or country), and so on.
We can solve this problem by introducing services that create digitally signed data structures which assert facts. These structures can then be used as an input to a transaction and distributed with the transaction data itself. Because the statements are themselves immutable and signed, it is impossible for an oracle to change its mind later and invalidate transactions that were previously found to be valid. In contrast, consider what would happen if a contract could do an HTTP request: it’s possible that an answer would change after being downloaded, resulting in loss of consensus.
The two basic approaches¶
The architecture provides two ways of implementing oracles with different tradeoffs:
- Using commands
- Using attachments
When a fact is encoded in a command, it is embedded in the transaction itself. The oracle then acts as a co-signer to the entire transaction. The oracle’s signature is valid only for that transaction, and thus even if a fact (like a stock price) does not change, every transaction that incorporates that fact must go back to the oracle for signing.
When a fact is encoded as an attachment, it is a separate object to the transaction and is referred to by hash. Nodes download attachments from peers at the same time as they download transactions, unless of course the node has already seen that attachment, in which case it won’t fetch it again. Contracts have access to the contents of attachments when they run.
Note
Currently attachments do not support digital signing, but this is a planned feature.
As you can see, both approaches share a few things: they both allow arbitrary binary data to be provided to transactions (and thus contracts). The primary difference is whether the data is a freely reusable, standalone object or whether it’s integrated with a transaction.
Here’s a quick way to decide which approach makes more sense for your data source:
- Is your data continuously changing, like a stock price, the current time, etc? If yes, use a command.
- Is your data commercially valuable, like a feed which you are not allowed to resell unless it’s incorporated into a business deal? If yes, use a command, so you can charge money for signing the same fact in each unique business context.
- Is your data very small, like a single number? If yes, use a command.
- Is your data large, static and commercially worthless, for instance, a holiday calendar? If yes, use an attachment.
- Is your data intended for human consumption, like a PDF of legal prose, or an Excel spreadsheet? If yes, use an attachment.
Asserting continuously varying data¶
Let’s look at the interest rates oracle that can be found in the NodeInterestRates
file. This is an example of
an oracle that uses a command because the current interest rate fix is a constantly changing fact.
The obvious way to implement such a service is this:
- The creator of the transaction that depends on the interest rate sends it to the oracle.
- The oracle inserts a command with the rate and signs the transaction.
- The oracle sends it back.
But this has a problem - it would mean that the oracle has to be the first entity to sign the transaction, which might impose ordering constraints we don’t want to deal with (being able to get all parties to sign in parallel is a very nice thing). So the way we actually implement it is like this:
- The creator of the transaction that depends on the interest rate asks for the current rate. They can abort at this point if they want to.
- They insert a command with that rate and the time it was obtained into the transaction.
- They then send it to the oracle for signing, along with everyone else, potentially in parallel. The oracle checks that the command has the correct data for the asserted time, and signs if so.
This same technique can be adapted to other types of oracle.
The oracle consists of a core class that implements the query/sign operations (for easy unit testing), and then a separate class that binds it to the network layer.
Here is an extract from the NodeInterestRates.Oracle
class and supporting types:
/** A [FixOf] identifies the question side of a fix: what day, tenor and type of fix ("LIBOR", "EURIBOR" etc) */
@CordaSerializable
data class FixOf(val name: String, val forDay: LocalDate, val ofTenor: Tenor)
/** A [Fix] represents a named interest rate, on a given day, for a given duration. It can be embedded in a tx. */
data class Fix(val of: FixOf, val value: BigDecimal) : CommandData
class Oracle {
fun query(queries: List<FixOf>): List<Fix>
fun sign(ftx: FilteredTransaction): TransactionSignature
}
The fix contains a timestamp (the forDay
field) that identifies the version of the data being requested. Since
there can be an arbitrary delay between a fix being requested via query
and the signature being requested via
sign
, this timestamp allows the Oracle to know which, potentially historical, value it is being asked to sign for. This is an
important technique for continuously varying data.
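A minimal sketch of the requester’s side (the oracle handle, builder and oracleParty identity are assumptions; FixOf, Fix and Tenor come from the finance module):
import java.time.LocalDate
import net.corda.finance.contracts.Fix
import net.corda.finance.contracts.FixOf
import net.corda.finance.contracts.Tenor

// Ask for a specific, dated fix and embed it as a command the oracle must co-sign.
// The forDay field pins exactly which version of the data is being asserted.
val fixOf = FixOf("LIBOR", LocalDate.of(2019, 3, 1), Tenor("3M"))
val fix: Fix = oracle.query(listOf(fixOf)).single()
builder.addCommand(fix, oracleParty.owningKey)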
Hiding transaction data from the oracle¶
Because the transaction is sent to the oracle for signing, ordinarily the oracle would be able to see the entire contents
of that transaction including the inputs, output contract states and all the commands, not just the one (in this case)
relevant command. This is an obvious privacy leak for the other participants. We currently solve this using a
FilteredTransaction
, which implements a Merkle Tree. These reveal only the necessary parts of the transaction to the
oracle but still allow it to sign it by providing the Merkle hashes for the remaining parts. See Oracles
for more details.
Pay-per-play oracles¶
Because the signature covers the transaction, and transactions may end up being forwarded anywhere, the fact itself
is independently checkable. However, this approach can still be useful when the data itself costs money, because the act
of issuing the signature in the first place can be charged for (e.g. by requiring the submission of a fresh
Cash.State
that has been re-assigned to a key owned by the oracle service). Because the signature covers the
transaction and not only the fact, this allows for a kind of weak pseudo-DRM over data feeds. Whilst a
contract could in theory include a transaction parsing and signature checking library, writing a contract in this way
would be conclusive evidence of intent to disobey the rules of the service (res ipsa loquitur). In an environment
where parties are legally identifiable, usage of such a contract would by itself be sufficient to trigger some sort of
punishment.
Implementing an oracle with continuously varying data¶
Implement the core classes¶
The key is to implement your oracle in a similar way to the NodeInterestRates.Oracle
outline we gave above with
both a query
and a sign
method. Typically you would want one class that encapsulates the parameters to the query
method (FixOf
, above), and a CommandData
implementation (Fix
, above) that encapsulates both an instance of
that parameter class and an instance of whatever the result of the query
is (BigDecimal
above).
The NodeInterestRates.Oracle
allows querying for multiple Fix
objects but that is not necessary and is
provided for the convenience of callers who need multiple fixes and want to be able to do it all in one query request.
Assuming you have a data source and can query it, it should be very easy to implement your query
method and the
parameter and CommandData
classes.
Let’s see how the sign
method for NodeInterestRates.Oracle
is written:
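In condensed, sketch form (field names such as myKey and knownFixes stand in for the sample's data source; see the source file for the full version):

fun sign(ftx: FilteredTransaction): TransactionSignature {
    // 1. The partial Merkle tree we received must be valid.
    ftx.verify()
    fun commandValidator(elem: Command<*>): Boolean {
        // 2. We must be a requested signer and the command must be a Fix.
        require(myKey in elem.signers && elem.value is Fix) {
            "Oracle received a command it was not asked to sign for"
        }
        // 3. The asserted fact must match our data source exactly.
        val fix = elem.value as Fix
        require(knownFixes[fix.of] == fix) { "Unknown fix: ${fix.of}" }
        return true
    }
    require(ftx.checkWithFun { elem ->
        when (elem) {
            is Command<*> -> commandValidator(elem)
            else -> throw IllegalArgumentException("Oracle received unexpected transaction data")
        }
    })
    // 4. All checks passed: sign over the Merkle root and return the signature.
    return services.createSignature(ftx, myKey)
}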
Here we can see that there are several steps:

- Ensure that the transaction we have been sent is indeed valid and passes verification, even though we cannot see all of it
- Check that we only received commands as expected, and each of those commands expects us to sign for them and is of the expected type (Fix here)
- Iterate over each of the commands we identified in the last step and check that the data they represent matches exactly our data source
- The final step, assuming we have got this far, is to generate a signature for the transaction and return it
Binding to the network¶
Note
Before reading any further, we advise that you understand the concept of flows and how to write them and use them. See Writing flows. Likewise some understanding of Cordapps, plugins and services will be helpful. See Running nodes locally.
The first step is to create the oracle as a service by annotating its class with @CordaService
. Let’s see how that’s
done:
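In sketch form (a minimal sketch only: the service must also implement SerializeAsToken, typically via SingletonSerializeAsToken, and the method bodies are elided here):

@CordaService
class Oracle(private val services: ServiceHub) : SingletonSerializeAsToken() {
    private val myKey = services.myInfo.legalIdentities.first().owningKey

    fun query(queries: List<FixOf>): List<Fix> = TODO("Look each fix up in the data source")

    fun sign(ftx: FilteredTransaction): TransactionSignature = TODO("As sketched above")
}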
The Corda node scans for any class with this annotation and initialises it. The only requirement is that the class provides a constructor with a single parameter of type ServiceHub.
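The responder flows are condensed here into a sketch (class names approximate to the sample's):

@InitiatedBy(RatesFixFlow.FixQueryFlow::class)
class FixQueryHandler(private val session: FlowSession) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        val queries = session.receive<List<FixOf>>().unwrap { it }
        // Look up the oracle service that the node initialised at startup.
        val oracle = serviceHub.cordaService(NodeInterestRates.Oracle::class.java)
        session.send(oracle.query(queries))
    }
}

@InitiatedBy(RatesFixFlow.FixSignFlow::class)
class FixSignHandler(private val session: FlowSession) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        val ftx = session.receive<FilteredTransaction>().unwrap { it }
        val oracle = serviceHub.cordaService(NodeInterestRates.Oracle::class.java)
        session.send(oracle.sign(ftx))
    }
}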
These two flows leverage the oracle to provide the querying and signing operations. They get a reference to the oracle, which will have already been initialised by the node, using ServiceHub.cordaService. Both flows are annotated with @InitiatedBy. This tells the node which initiating flow (discussed in the next section) they are meant to be executed with.
Providing sub-flows for querying and signing¶
We mentioned the client sub-flow briefly above. They are the mechanism that clients, in the form of other flows, will
use to interact with your oracle. Typically there will be one for querying and one for signing. Let’s take a look at
those for NodeInterestRates.Oracle
.
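A condensed sketch of those client-side sub-flows (the real versions add progress trackers and more careful validation):

@InitiatingFlow
class FixQueryFlow(private val fixOf: FixOf, private val oracle: Party) : FlowLogic<Fix>() {
    @Suspendable
    override fun call(): Fix {
        val session = initiateFlow(oracle)
        // Send the question and expect exactly one matching fix back.
        return session.sendAndReceive<List<Fix>>(listOf(fixOf)).unwrap { it.single() }
    }
}

@InitiatingFlow
class FixSignFlow(private val tx: TransactionBuilder, private val oracle: Party,
                  private val partialMerkleTx: FilteredTransaction) : FlowLogic<TransactionSignature>() {
    @Suspendable
    override fun call(): TransactionSignature {
        val session = initiateFlow(oracle)
        return session.sendAndReceive<TransactionSignature>(partialMerkleTx).unwrap { sig ->
            // The oracle must have signed with its advertised key, over this Merkle root.
            check(sig.by == oracle.owningKey)
            sig.verify(partialMerkleTx.id)
            sig
        }
    }
}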
You’ll note that the FixSignFlow
requires a FilteredTransaction
instance which includes only Fix
commands.
You can find a further explanation of this in Oracles. Below you will see how to build such a
transaction with hidden fields.
Using an oracle¶
The oracle is invoked through sub-flows to query for values, add them to the transaction as commands and then get
the transaction signed by the oracle. Following on from the above examples, this is all encapsulated in a sub-flow
called RatesFixFlow
. Here’s the call
method of that flow.
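A condensed sketch of that call method (the sample adds progress tracking and more validation):

@Suspendable
override fun call(): TransactionSignature {
    progressTracker.currentStep = QUERYING
    val fix = subFlow(FixQueryFlow(fixOf, oracle))                 // query the oracle
    progressTracker.currentStep = WORKING
    checkFixIsNearExpected(fix)                                    // quick validation
    tx.addCommand(fix, oracle.owningKey)                           // add the fact as a command
    beforeSigning(fix)                                             // extension point for output states
    progressTracker.currentStep = SIGNING
    // Tear off everything except the commands our filtering function reveals.
    val mtx = tx.toWireTransaction(serviceHub).buildFilteredTransaction(Predicate { filtering(it) })
    return subFlow(FixSignFlow(tx, oracle, mtx))                   // request the oracle's signature
}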
As you can see, this:
- Queries the oracle for the fact using the client sub-flow for querying defined above
- Does some quick validation
- Adds the command to the transaction containing the fact to be signed for by the oracle
- Calls an extension point that allows clients to generate output states based on the fact from the oracle
- Builds a filtered transaction based on a filtering function extended from
RatesFixFlow
- Requests the signature from the oracle using the client sub-flow for signing from above
Here’s an example of it in action from FixingFlow.Fixer
.
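The Fixer code itself is not reproduced here; the pattern is a subclass of RatesFixFlow with the filtering function overridden, roughly like this (constructor arguments approximate):

val addFixing = object : RatesFixFlow(ptx, oracle, fixOf, expectedRate, rateTolerance) {
    override fun filtering(elem: Any): Boolean = when (elem) {
        // Reveal only the Fix commands our oracle is expected to sign for.
        is Command<*> -> oracle.owningKey in elem.signers && elem.value is Fix
        else -> false
    }
}
subFlow(addFixing)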
Note

When overriding, be careful when making the sub-class an anonymous or inner class (object declarations in Kotlin), because such classes can access variables from the enclosing scope, which can cause serialisation problems when the flow is checkpointed.
Testing¶
The MockNetwork
allows the creation of MockNode
instances, which are simplified nodes that can be used for testing (see API: Testing).
When creating the MockNetwork
you supply a list of TestCordapp
objects which point to CorDapps on
the classpath. These CorDapps will be installed on each node on the network. Make sure the packages you provide include the CorDapp
containing your oracle service.
You can then write tests on your mock network to verify the nodes interact with your Oracle correctly.
See here for more examples.
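As a sketch of what such a test can look like (the package name com.example.oracle is hypothetical):

class OracleTests {
    private lateinit var mockNet: MockNetwork
    private lateinit var oracleNode: StartedMockNode

    @Before
    fun setUp() {
        mockNet = MockNetwork(MockNetworkParameters(
                cordappsForAllNodes = listOf(TestCordapp.findCordapp("com.example.oracle"))))
        oracleNode = mockNet.createNode()
    }

    @After
    fun tearDown() = mockNet.stopNodes()

    @Test
    fun `oracle signs over valid fix commands`() {
        // Drive the query/sign flows against oracleNode and assert on the returned signature.
    }
}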
Writing a custom notary service (experimental)¶
Warning
Customising a notary service is still an experimental feature and not recommended for most use-cases. The APIs for writing a custom notary may change in the future.
The first step is to create a service class in your CorDapp that extends the NotaryService
abstract class.
This will ensure that it is recognised as a notary service.
The custom notary service class should provide a constructor with two parameters of types ServiceHubInternal
and PublicKey
.
Note that ServiceHubInternal
does not provide any API stability guarantees.
The next step is to write a notary service flow. You are free to copy and modify the existing built-in flows such
as ValidatingNotaryFlow
, NonValidatingNotaryFlow
, or implement your own from scratch (following the
NotaryFlow.Service
template). Below is an example of a custom flow for a validating notary service:
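In skeleton form (illustrative only: the exact abstract members of NotaryService vary between Corda versions, so treat these signatures as assumptions rather than the actual sample):

class MyCustomValidatingNotaryService(
        override val services: ServiceHubInternal,
        override val notaryIdentityKey: PublicKey) : NotaryService() {
    override fun createServiceFlow(otherPartySession: FlowSession): FlowLogic<Void?> =
            MyValidatingNotaryFlow(otherPartySession, this)

    override fun start() {}
    override fun stop() {}
}

// Placeholder for the custom flow, e.g. modelled on ValidatingNotaryFlow: it
// would receive the notarisation request, resolve and verify the transaction,
// then commit the input states.
class MyValidatingNotaryFlow(
        private val otherPartySession: FlowSession,
        private val service: MyCustomValidatingNotaryService) : FlowLogic<Void?>() {
    @Suspendable
    override fun call(): Void? = null
}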
To enable the service, add the following to the node configuration:
notary : {
validating : true # Set to false if your service is non-validating
className : "net.corda.notarydemo.MyCustomValidatingNotaryService" # The fully qualified name of your service class
}
Transaction tear-offs¶
Suppose we want to construct a transaction that includes commands containing interest rate fix data as in
Writing oracle services. Before sending the transaction to the oracle to obtain its signature, we need to filter out every part
of the transaction except for the Fix
commands.
To do so, we need to create a filtering function that specifies which fields of the transaction should be included. Each field will only be included if the filtering function returns true when the field is passed in as input.
val filtering = Predicate<Any> {
when (it) {
is Command<*> -> oracle.owningKey in it.signers && it.value is Fix
else -> false
}
}
We can now use our filtering function to construct a FilteredTransaction
:
val ftx: FilteredTransaction = stx.buildFilteredTransaction(filtering)
In the Oracle example this step takes place in RatesFixFlow
by overriding the filtering
function. See
Using an oracle.
Both WireTransaction
and FilteredTransaction
inherit from TraversableTransaction
, so access to the
transaction components is exactly the same. Note that unlike WireTransaction
,
FilteredTransaction
only holds data that we wanted to reveal (after filtering).
// Direct access to included commands, inputs, outputs, attachments etc.
val cmds: List<Command<*>> = ftx.commands
val ins: List<StateRef> = ftx.inputs
val timeWindow: TimeWindow? = ftx.timeWindow
// ...
The signing part of an Oracle is implemented in NodeInterestRates.kt.
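The verbatim snippet is not reproduced here; as a hedged sketch, it follows the same sign pattern shown earlier, optionally strengthened with the visibility check discussed in the note below:

fun sign(ftx: FilteredTransaction): TransactionSignature {
    // Optionally require that every command was revealed to us (see note below).
    ftx.checkAllComponentsVisible(ComponentGroupEnum.COMMANDS_GROUP)
    ftx.verify()
    // ... validate the revealed Fix commands against the data source ...
    return services.createSignature(ftx, services.myInfo.legalIdentities.first().owningKey)
}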
Note

The way the FilteredTransaction is constructed ensures that, after signing of the root hash, it's impossible to add or remove
components (leaves). However, when a transaction contains multiple commands, a party may reveal only a subset of them to the Oracle.
As signing is done over the Merkle root hash, the service signs all commands of a given type, even though it didn't see
all of them. In cases where all of the commands should be visible to an Oracle, you can call ftx.checkAllComponentsVisible(COMMANDS_GROUP)
before invoking ftx.verify.
checkAllComponentsVisible
is using a sophisticated underlying partial Merkle tree check to guarantee that all of
the components of a particular group that existed in the original WireTransaction
are included in the received
FilteredTransaction
.
Using attachments¶
Attachments are ZIP/JAR files referenced from a transaction by hash, but not included in the transaction itself. These files are automatically requested from the node sending the transaction when needed, and cached locally so they are not re-requested if encountered again. Attachments typically contain:

- Contract code
- Metadata about a transaction, such as a PDF version of an invoice being settled
- Shared information to be permanently recorded on the ledger
To add attachments the file must first be uploaded to the node, which returns a unique ID that can be added
using TransactionBuilder.addAttachment()
. Attachments can be uploaded and downloaded via RPC and the Corda
Node shell.
It is encouraged that where possible attachments are reusable data, so that nodes can meaningfully cache them.
Uploading and downloading¶
To upload an attachment to the node, or download an attachment named by its hash, you use RPC (see Interacting with a node). This is also available for interactive use via the shell. To upload run:
>>> run uploadAttachment jar: path/to/the/file.jar
or
>>> run uploadAttachmentWithMetadata jar: path/to/the/file.jar, uploader: myself, filename: original_name.jar
to include the metadata with the attachment which can be used to find it later on. Note that currently both uploader and filename are just plain strings (there is no connection between uploader and the RPC users, for example).
The file is uploaded, checked and if successful the hash of the file is returned. This is how the attachment is identified inside the node.
To download an attachment, you can do:
>>> run openAttachment id: AB7FED7663A3F195A59A0F01091932B15C22405CB727A1518418BF53C6E6663A
which will then ask you to provide a path to save the file to. To do the same thing programmatically, you
can pass a simple InputStream
or SecureHash
to the uploadAttachment
/openAttachment
RPCs from
a JVM client.
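As a minimal sketch of that programmatic route (connection details assumed):

val proxy: CordaRPCOps = CordaRPCClient(NetworkHostAndPort("localhost", 10006))
        .start("user1", "test").proxy
// Upload: returns the SecureHash identifying the attachment inside the node.
val id: SecureHash = FileInputStream("path/to/the/file.jar").use { proxy.uploadAttachment(it) }
// Download: returns an InputStream over the attachment's bytes.
proxy.openAttachment(id).use { jar -> /* read or save the JAR */ }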
Searching for attachments¶
Attachment metadata can be queried in a similar way to the vault (see API: Vault Query).
AttachmentQueryCriteria
can be used to build a query using the following set of column operations:
- Binary logical (AND, OR)
- Comparison (LESS_THAN, LESS_THAN_OR_EQUAL, GREATER_THAN, GREATER_THAN_OR_EQUAL)
- Equality (EQUAL, NOT_EQUAL)
- Likeness (LIKE, NOT_LIKE)
- Nullability (IS_NULL, NOT_NULL)
- Collection based (IN, NOT_IN)
The and
and or
operators can be used to build complex queries. For example:
assertEquals(
    emptyList(),
    storage.queryAttachments(
        AttachmentsQueryCriteria(uploaderCondition = Builder.equal("complexA"))
            .and(AttachmentsQueryCriteria(uploaderCondition = Builder.equal("complexB")))))

assertEquals(
    listOf(hashA, hashB),
    storage.queryAttachments(
        AttachmentsQueryCriteria(uploaderCondition = Builder.equal("complexA"))
            .or(AttachmentsQueryCriteria(uploaderCondition = Builder.equal("complexB")))))

complexCondition =
    (uploaderCondition("complexB").and(filenamerCondition("archiveB.zip"))).or(filenamerCondition("archiveC.zip"))
Protocol¶
Normally attachments on transactions are fetched automatically via the ReceiveTransactionFlow
. Attachments
are needed in order to validate a transaction (they include, for example, the contract code), so must be fetched
before the validation process can run.
Note
Future versions of Corda may support non-critical attachments that are not used for transaction verification and which are shared explicitly. These are useful for attaching and signing auditing data with a transaction that isn’t used as part of the contract logic.
Attachments demo¶
There is a worked example of attachments, which relays a simple document from one node to another. The “two party trade flow” also includes an attachment, however it is a significantly more complex demo, and less well suited for a tutorial.
The demo code is in the file samples/attachment-demo/src/main/kotlin/net/corda/attachmentdemo/AttachmentDemo.kt
,
with the core logic contained within the two functions recipient()
and sender()
. The first thing it does is set
up an RPC connection to node B using a demo user account (this is all configured in the gradle build script for the demo
and the nodes will be created using the deployNodes
gradle task as normal). The CordaRPCClient.use
method is a
convenience helper intended for small tools that sets up an RPC connection scoped to the provided block, and brings all
the RPCs into scope. Once connected the sender/recipient functions are run with the RPC proxy as a parameter.
We’ll look at the recipient function first.
The first thing it does is wait to receive a notification of a new transaction by calling the verifiedTransactions
RPC, which returns both a snapshot and an observable of changes. The observable is made blocking and the next
transaction the node verifies is retrieved. That transaction is checked to see if it has the expected attachment
and if so, printed out.
The sender correspondingly builds a transaction with the attachment, then calls FinalityFlow
to complete the
transaction and send it to the recipient node:
This side is a bit more complex. Firstly it looks up its counterparty by name in the network map. Then, if the node
doesn’t already have the attachment in its storage, we upload it from a JAR resource and check the hash was what
we expected. Then a trivial transaction is built that has the attachment and a single signature and it’s sent to
the other side using the FinalityFlow. The result of starting the flow is a stream of progress messages and a
returnValue
observable that can be used to watch out for the flow completing successfully.
Event scheduling¶
This article explains our approach to modelling time based events in code. It explains how a contract state can expose an upcoming event and what action to take if the scheduled time for that event is reached.
Introduction¶
Many financial instruments have time sensitive components to them. For example, an Interest Rate Swap has a schedule for when:
- Interest rate fixings should take place for floating legs, so that the interest rate used as the basis for payments can be agreed.
- Any payments between the parties are expected to take place.
- Any payments between the parties become overdue.
Each of these is dependent on the current state of the financial instrument. What payments and interest rate fixings have already happened should already be recorded in the state, for example. The next time-sensitive event is therefore a property of the current contract state. By "next", we mean the earliest event in chronological terms that is still due. If a contract state is consumed in the UTXO model, then what was the next event becomes irrelevant and obsolete, and the next time-sensitive event is determined by any successor contract state.
Knowing when the next time-sensitive event is due to occur is useful, but typically some activity is expected to take place when this event occurs. We already have a model for business processes in the form of flows, so in the platform we have introduced the concept of scheduled activities that can invoke flow state machines at a scheduled time. A contract state can optionally describe the next scheduled activity for itself. If it omits to do so, then nothing will be scheduled.
How to implement scheduled events¶
There are two main steps to implementing scheduled events:
- Have your ContractState implementation also implement SchedulableState. This requires a method named nextScheduledActivity to be implemented which returns an optional ScheduledActivity instance. ScheduledActivity captures what FlowLogic instance each node will run, to perform the activity, and when it will run is described by a java.time.Instant. Once your state implements this interface and is tracked by the vault, it can expect to be queried for the next activity when committed to the vault. The FlowLogic must be annotated with @SchedulableFlow.
- If nothing suitable exists, implement a FlowLogic to be executed by each node as the activity itself. The important thing to remember is that in the current implementation, each node that is party to the transaction will execute the same FlowLogic, so it needs to establish roles in the business process based on the contract state and the node it is running on. Each side will follow different but complementary paths through the business logic.
Note
The scheduler’s clock always operates in the UTC time zone for uniformity, so any time zone logic must be
performed by the contract, using ZonedDateTime
.
The production and consumption of ContractStates
is observed by the scheduler and the activities associated with
any consumed states are unscheduled. Any newly produced states are then queried via the nextScheduledActivity
method and if they do not return null
then that activity is scheduled based on the content of the
ScheduledActivity
object returned. Be aware that this only happens if the vault considers the state
“relevant”, for instance, because the owner of the node also owns that state. States that your node happens to
encounter but which aren’t related to yourself will not have any activities scheduled.
An example¶
Let’s take an example of the interest rate swap fixings for our scheduled events. The first task is to implement the
nextScheduledActivity
method on the State
.
The first thing this does is establish if there are any remaining fixings. If there are none, then it returns null
to indicate that there is no activity to schedule. Otherwise it calculates the Instant
at which the interest rate
should become available and schedules an activity at that time to work out what roles each node will take in the fixing
business process and to take on those roles. That FlowLogic
will be handed the StateRef
for the interest
rate swap State
in question, as well as a tolerance Duration
of how long to wait after the activity is triggered
for the interest rate before indicating an error.
Observer nodes¶
Posting transactions to an observer node is a common requirement in finance, where regulators often want to receive comprehensive reporting on all actions taken. By running their own node, regulators can receive a stream of digitally signed, de-duplicated reports useful for later processing.
Adding support for observer nodes to your application is easy. The IRS (interest rate swap) demo shows how to do it.
Just define a new flow that wraps the SendTransactionFlow/ReceiveTransactionFlow, as follows:
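A condensed sketch of that pair of flows (close to the IRS demo, but trimmed):

@InitiatingFlow
class ReportToRegulatorFlow(private val regulator: Party,
                            private val finalTx: SignedTransaction) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        val session = initiateFlow(regulator)
        subFlow(SendTransactionFlow(session, finalTx))
    }
}

@InitiatedBy(ReportToRegulatorFlow::class)
class ReceiveRegulatoryReportFlow(private val otherSideSession: FlowSession) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        // ALL_VISIBLE records every state in the transaction, not just the ones
        // that involve our public keys.
        subFlow(ReceiveTransactionFlow(otherSideSession, true, StatesToRecord.ALL_VISIBLE))
    }
}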
In this example, the AutoOfferFlow
is the business logic, and we define two very short and simple flows to send
the transaction to the regulator. There are two important aspects to note here:
- The
ReportToRegulatorFlow
is marked as an@InitiatingFlow
because it will start a new conversation, context free, with the regulator. - The
ReceiveRegulatoryReportFlow
usesReceiveTransactionFlow
in a special way - it tells it to send the transaction to the vault for processing, including all states even if not involving our public keys. This is required because otherwise the vault will ignore states that don’t list any of the node’s public keys, but in this case, we do want to passively observe states we can’t change. So overriding this behaviour is required.
If the states define a relational mapping (see API: Persistence) then the regulator will be able to query the reports from their database and observe new transactions coming in via RPC.
Caveats¶
- By default, vault queries do not differentiate between states you recorded as a participant/owner, and states you
recorded as an observer. You will have to write custom vault queries that only return states for which you are a
participant/owner. See https://docs.corda.net/api-vault-query.html#example-usage for information on how to do this.
This also means that
Cash.generateSpend
should not be used when recordingCash.State
states as an observer - Nodes only record each transaction once. If a node has already recorded a transaction in non-observer mode, it cannot later re-record the same transaction as an observer. This issue is tracked here: https://r3-cev.atlassian.net/browse/CORDA-883
- When an observer node is sent a transaction with the ALL_VISIBLE flag set, any transactions in the transaction history that have not already been received will also have ALL_VISIBLE states recorded. This means a node that is both an observer and a participant may have some transactions with all states recorded and some with only relevant states recorded, even if those transactions are part of the same chain. As a result, there may be more states present in the vault than would be expected if just those transactions sent with the ALL_VISIBLE recording flag were processed in this way.
Tools¶
Corda provides various command line and GUI tools to help you as you work. Along with the three below, you may also wish to try the Blob Inspector.
Corda Network Builder¶
The Corda Network Builder is a tool for building Corda networks for testing purposes. It leverages Docker and containers to abstract the complexity of managing a distributed network away from the user.

The network you build will either be made up of local docker
nodes or of nodes spread across Azure
containers. More backends may be added in future. The tool is open source, so contributions to add more
destinations for the containers are welcome!
Download the Corda Network Builder.
Prerequisites¶
- Docker: docker > 17.12.0-ce
- Azure: authenticated az-cli >= 2.0 (see: https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest)
Creating the base nodes¶
The network builder uses a set of nodes as the base for all other operations. A node is anything that satisfies the following layout:
-
-- node.conf
-- corda.jar
-- cordapps/
An easy way to build a valid set of nodes is by running deployNodes
. In this document, we will be using
the output of running deployNodes
for the Example CorDapp:
git clone https://github.com/corda/cordapp-example
cd cordapp-example
./gradlew clean deployNodes
Building a network via the command line¶
Starting the nodes¶
Quickstart Local Docker¶
cd kotlin-source/build/nodes
java -jar <path/to/network-builder-jar> -d .
If you run docker ps
to see the running containers, the following output should be displayed:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
406868b4ba69 node-partyc:corda-network "/run-corda.sh" 17 seconds ago Up 16 seconds 0.0.0.0:32902->10003/tcp, 0.0.0.0:32895->10005/tcp, 0.0.0.0:32898->10020/tcp, 0.0.0.0:32900->12222/tcp partyc0
4546a2fa8de7 node-partyb:corda-network "/run-corda.sh" 17 seconds ago Up 17 seconds 0.0.0.0:32896->10003/tcp, 0.0.0.0:32899->10005/tcp, 0.0.0.0:32901->10020/tcp, 0.0.0.0:32903->12222/tcp partyb0
c8c44c515bdb node-partya:corda-network "/run-corda.sh" 17 seconds ago Up 17 seconds 0.0.0.0:32894->10003/tcp, 0.0.0.0:32897->10005/tcp, 0.0.0.0:32892->10020/tcp, 0.0.0.0:32893->12222/tcp partya0
cf7ab689f493 node-notary:corda-network "/run-corda.sh" 30 seconds ago Up 31 seconds 0.0.0.0:32888->10003/tcp, 0.0.0.0:32889->10005/tcp, 0.0.0.0:32890->10020/tcp, 0.0.0.0:32891->12222/tcp notary0
Quickstart Remote Azure¶
cd kotlin-source/build/nodes
java -jar <path/to/network-builder-jar> -b AZURE -d .
Note
The Azure configuration is handled by the az-cli utility. See the Prerequisites.
Interacting with the nodes¶
You can interact with the nodes by SSHing into them on the port that is mapped to 12222. For example, to SSH into the
partya0
node, you would run:
ssh user1@localhost -p 32893
Password authentication
Password:
Welcome to the Corda interactive shell.
Useful commands include 'help' to see what is available, and 'bye' to shut down the node.
>>> run networkMapSnapshot
[
{ "addresses" : [ "partya0:10020" ], "legalIdentitiesAndCerts" : [ "O=PartyA, L=London, C=GB" ], "platformVersion" : 3, "serial" : 1532701330613 },
{ "addresses" : [ "notary0:10020" ], "legalIdentitiesAndCerts" : [ "O=Notary, L=London, C=GB" ], "platformVersion" : 3, "serial" : 1532701305115 },
{ "addresses" : [ "partyc0:10020" ], "legalIdentitiesAndCerts" : [ "O=PartyC, L=Paris, C=FR" ], "platformVersion" : 3, "serial" : 1532701331608 },
{ "addresses" : [ "partyb0:10020" ], "legalIdentitiesAndCerts" : [ "O=PartyB, L=New York, C=US" ], "platformVersion" : 3, "serial" : 1532701330118 }
]
>>>
Adding additional nodes¶
It is possible to add additional nodes to the network by reusing the nodes you built earlier. For example, to add a
node by reusing the existing PartyA
node, you would run:
java -jar <network-builder-jar> --add "PartyA=O=PartyZ,L=London,C=GB"
To confirm the node has been started correctly, run the following in the previously connected SSH session:
Tue Jul 17 15:47:14 GMT 2018>>> run networkMapSnapshot
[
{ "addresses" : [ "partya0:10020" ], "legalIdentitiesAndCerts" : [ "O=PartyA, L=London, C=GB" ], "platformVersion" : 3, "serial" : 1532701330613 },
{ "addresses" : [ "notary0:10020" ], "legalIdentitiesAndCerts" : [ "O=Notary, L=London, C=GB" ], "platformVersion" : 3, "serial" : 1532701305115 },
{ "addresses" : [ "partyc0:10020" ], "legalIdentitiesAndCerts" : [ "O=PartyC, L=Paris, C=FR" ], "platformVersion" : 3, "serial" : 1532701331608 },
{ "addresses" : [ "partyb0:10020" ], "legalIdentitiesAndCerts" : [ "O=PartyB, L=New York, C=US" ], "platformVersion" : 3, "serial" : 1532701330118 },
{ "addresses" : [ "partya1:10020" ], "legalIdentitiesAndCerts" : [ "O=PartyZ, L=London, C=GB" ], "platformVersion" : 3, "serial" : 1532701630861 }
]
Building a network in Graphical User Mode¶
The Corda Network Builder also provides a GUI for when automated interactions are not required. To launch it, run
java -jar <path/to/network-builder-jar> -g
.
Starting the nodes¶
- Click
Open nodes ...
and select the folder where you built your nodes in Creating the base nodes and clickOpen
- Select
Local Docker
orAzure
- Click
Build
Note
The Azure configuration is handled by the az-cli utility. See the Prerequisites.
All the nodes should eventually move to a Status
of INSTANTIATED
. If you run docker ps
from the terminal to
see the running containers, the following output should be displayed:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
406868b4ba69 node-partyc:corda-network "/run-corda.sh" 17 seconds ago Up 16 seconds 0.0.0.0:32902->10003/tcp, 0.0.0.0:32895->10005/tcp, 0.0.0.0:32898->10020/tcp, 0.0.0.0:32900->12222/tcp partyc0
4546a2fa8de7 node-partyb:corda-network "/run-corda.sh" 17 seconds ago Up 17 seconds 0.0.0.0:32896->10003/tcp, 0.0.0.0:32899->10005/tcp, 0.0.0.0:32901->10020/tcp, 0.0.0.0:32903->12222/tcp partyb0
c8c44c515bdb node-partya:corda-network "/run-corda.sh" 17 seconds ago Up 17 seconds 0.0.0.0:32894->10003/tcp, 0.0.0.0:32897->10005/tcp, 0.0.0.0:32892->10020/tcp, 0.0.0.0:32893->12222/tcp partya0
cf7ab689f493 node-notary:corda-network "/run-corda.sh" 30 seconds ago Up 31 seconds 0.0.0.0:32888->10003/tcp, 0.0.0.0:32889->10005/tcp, 0.0.0.0:32890->10020/tcp, 0.0.0.0:32891->12222/tcp notary0
Adding additional nodes¶
It is possible to add additional nodes to the network by reusing the nodes you built earlier. For example, to add a
node by reusing the existing PartyA
node, you would:
- Select
partya
in the dropdown - Click
Add Instance
- Specify the new node’s X500 name and click
OK
If you click on partya
in the pane, you should see an additional instance listed in the sidebar. To confirm the
node has been started correctly, run the following in the previously connected SSH session:
Tue Jul 17 15:47:14 GMT 2018>>> run networkMapSnapshot
[
{ "addresses" : [ "partya0:10020" ], "legalIdentitiesAndCerts" : [ "O=PartyA, L=London, C=GB" ], "platformVersion" : 3, "serial" : 1532701330613 },
{ "addresses" : [ "notary0:10020" ], "legalIdentitiesAndCerts" : [ "O=Notary, L=London, C=GB" ], "platformVersion" : 3, "serial" : 1532701305115 },
{ "addresses" : [ "partyc0:10020" ], "legalIdentitiesAndCerts" : [ "O=PartyC, L=Paris, C=FR" ], "platformVersion" : 3, "serial" : 1532701331608 },
{ "addresses" : [ "partyb0:10020" ], "legalIdentitiesAndCerts" : [ "O=PartyB, L=New York, C=US" ], "platformVersion" : 3, "serial" : 1532701330118 },
{ "addresses" : [ "partya1:10020" ], "legalIdentitiesAndCerts" : [ "O=PartyZ, L=London, C=GB" ], "platformVersion" : 3, "serial" : 1532701630861 }
]
Shutting down the nodes¶
Run docker kill $(docker ps -q)
to kill all running Docker processes.
Network Bootstrapper¶
Test deployments¶
Nodes within a network see each other using the network map. This is a collection of statically signed node-info files, one for each node. Most production deployments will use a highly available, secure distribution of the network map via HTTP.
For test deployments where the nodes (at least initially) reside on the same filesystem, these node-info files can be
placed directly in the node’s additional-node-infos
directory from where the node will pick them up and store them
in its local network map cache. The node generates its own node-info file on startup.
In addition to the network map, all the nodes must also use the same set of network parameters. These are a set of constants which guarantee interoperability between the nodes. The HTTP network map distributes the network parameters which are downloaded automatically by the nodes. In the absence of this the network parameters must be generated locally.
For these reasons, test deployments can avail themselves of the Network Bootstrapper. This is a tool that scans all the node configurations from a common directory to generate the network parameters file, which is then copied to all the nodes’ directories. It also copies each node’s node-info file to every other node so that they can all be visible to each other.
You can find out more about network maps and network parameters from Network Map.
Bootstrapping a test network¶
The Corda Network Bootstrapper can be downloaded from here.
Create a directory containing a node config file, ending in “_node.conf”, for each node you want to create. “devMode” must be set to true. Then run the following command:
java -jar corda-tools-network-bootstrapper-4.1-RC01.jar --dir <nodes-root-dir>
For example running the command on a directory containing these files:
.
├── notary_node.conf // The notary's node.conf file
├── partya_node.conf // Party A's node.conf file
└── partyb_node.conf // Party B's node.conf file
will generate directories containing three nodes: notary
, partya
and partyb
. They will each use the corda.jar
that comes with the Network Bootstrapper. If a different version of Corda is required then simply place that corda.jar
file
alongside the configuration files in the directory.
You can also have the node directories containing their “node.conf” files already laid out. The previous example would be:
.
├── notary
│ └── node.conf
├── partya
│ └── node.conf
└── partyb
└── node.conf
Similarly, each node directory may contain its own corda.jar
, which the Bootstrapper will use instead.
Providing CorDapps to the Network Bootstrapper¶
If you would like the Network Bootstrapper to include your CorDapps in each generated node, just place them in the directory alongside the config files. For example, if your directory has this structure:
.
├── notary_node.conf // The notary's node.conf file
├── partya_node.conf // Party A's node.conf file
├── partyb_node.conf // Party B's node.conf file
├── cordapp-a.jar // A cordapp to be installed on all nodes
└── cordapp-b.jar // Another cordapp to be installed on all nodes
The cordapp-a.jar
and cordapp-b.jar
will be installed in each node directory, and any contracts within them will be
added to the Contract Whitelist (see below).
Whitelisting contracts¶
Any CorDapps provided when bootstrapping a network will be scanned for contracts which will be used to create the Zone whitelist (see API: Contract Constraints) for the network.
Note
If you only wish to whitelist the CorDapps but not copy them to each node then run with the --copy-cordapps=No
option.
The CorDapp JARs will be hashed and scanned for Contract
classes. These contract class implementations will become part
of the whitelisted contracts in the network parameters (see NetworkParameters.whitelistedContractImplementations
网络地图).
By default the Bootstrapper will whitelist all the contracts found in unsigned CorDapp JARs (JAR files not signed by the jarsigner tool).
Whitelisted contracts are checked by Zone constraints, while contract classes from signed JARs will be checked by Signature constraints.
To prevent certain contracts from unsigned JARs from being whitelisted, add their fully qualified class names to the exclude_whitelist.txt file.
These will instead use the more restrictive HashAttachmentConstraint.
To add certain contracts from signed JARs to the whitelist, add their fully qualified class names to the include_whitelist.txt file.
Refer to API: Contract Constraints to understand the implications of the different constraint types before adding exclude_whitelist.txt or include_whitelist.txt files.
For example:
net.corda.finance.contracts.asset.Cash
net.corda.finance.contracts.asset.CommercialPaper
Modifying a bootstrapped network¶
The Network Bootstrapper is provided as a development tool for setting up Corda networks for development and testing. There is some limited functionality which can be used to make changes to a network, but for anything more complicated consider using a Network Map server.
When running the Network Bootstrapper, each node-info
file needs to be gathered together in one directory. If
the nodes are being run on different machines you need to do the following:
- Copy the node directories from each machine into one directory, on one machine
- Depending on the modification being made (see below for more information), add any new files required to the root directory
- Run the Network Bootstrapper from the root directory
- Copy each individual node’s directory back to the original machine
The Network Bootstrapper cannot dynamically update the network if an existing node has changed something in their node-info,
e.g. their P2P address. For this the new node-info file will need to be placed in the other nodes’ additional-node-infos
directory.
If the nodes are located on different machines, then a utility such as rsync can be used
so that the nodes can share node-infos.
Adding a new node to the network¶
Running the Bootstrapper again on the same network will allow a new node to be added and its node-info distributed to the existing nodes.
As an example, if we have an existing bootstrapped network, with a Notary and PartyA and we want to add a PartyB, we can use the Network Bootstrapper on the following network structure:
.
├── notary // existing node directories
│ ├── node.conf
│ ├── network-parameters
│ ├── node-info-notary
│ └── additional-node-infos
│ ├── node-info-notary
│ └── node-info-partya
├── partya
│ ├── node.conf
│ ├── network-parameters
│ ├── node-info-partya
│ └── additional-node-infos
│ ├── node-info-notary
│ └── node-info-partya
└── partyb_node.conf // the node.conf for the node to be added
Then run the Network Bootstrapper again from the root dir:
java -jar corda-tools-network-bootstrapper-4.1-RC01.jar --dir <nodes-root-dir>
Which will give the following:
.
├── notary // the contents of the existing nodes (keys, db's etc...) are unchanged
│ ├── node.conf
│ ├── network-parameters
│ ├── node-info-notary
│ └── additional-node-infos
│ ├── node-info-notary
│ ├── node-info-partya
│ └── node-info-partyb
├── partya
│ ├── node.conf
│ ├── network-parameters
│ ├── node-info-partya
│ └── additional-node-infos
│ ├── node-info-notary
│ ├── node-info-partya
│ └── node-info-partyb
└── partyb // a new node directory is created for PartyB
├── node.conf
├── network-parameters
├── node-info-partyb
└── additional-node-infos
├── node-info-notary
├── node-info-partya
└── node-info-partyb
The Bootstrapper will generate a directory and the node-info
file for PartyB, and will also make sure a copy of each
nodes’ node-info
file is in the additional-node-info
directory of every node. Any other files in the existing nodes,
such as generated keys, will be unaffected.
Note
The Network Bootstrapper is provided for test deployments and can only generate information for nodes collected on the same machine. If a network needs to be updated using the Bootstrapper once deployed, the nodes will need collecting back together.
Updating the contract whitelist for bootstrapped networks¶
If the network already has a set of network parameters defined (i.e. the node directories all contain the same network-parameters file) then the Network Bootstrapper can be used to append contracts from new CorDapps to the current whitelist. For example, with the following pre-generated network:
.
├── notary
│ ├── node.conf
│ ├── network-parameters
│ └── cordapps
│ └── cordapp-a.jar
├── partya
│ ├── node.conf
│ ├── network-parameters
│ └── cordapps
│ └── cordapp-a.jar
├── partyb
│ ├── node.conf
│ ├── network-parameters
│ └── cordapps
│ └── cordapp-a.jar
└── cordapp-b.jar // The new cordapp to add to the existing nodes
Then run the Network Bootstrapper again from the root dir:
java -jar corda-tools-network-bootstrapper-4.1-RC01.jar --dir <nodes-root-dir>
To give the following:
.
├── notary
│ ├── node.conf
│ ├── network-parameters // The contracts from cordapp-b are appended to the whitelist in network-parameters
│ └── cordapps
│ ├── cordapp-a.jar
│ └── cordapp-b.jar // The updated cordapp is placed in the nodes cordapp directory
├── partya
│ ├── node.conf
│ ├── network-parameters // The contracts from cordapp-b are appended to the whitelist in network-parameters
│ └── cordapps
│ ├── cordapp-a.jar
│ └── cordapp-b.jar // The updated cordapp is placed in the nodes cordapp directory
└── partyb
├── node.conf
├── network-parameters // The contracts from cordapp-b are appended to the whitelist in network-parameters
└── cordapps
├── cordapp-a.jar
└── cordapp-b.jar // The updated cordapp is placed in the nodes cordapp directory
Note
The whitelist can only ever be appended to. Once added a contract implementation can never be removed.
Modifying the network parameters¶
The Network Bootstrapper creates a network parameters file when bootstrapping a network, using a set of sensible defaults. However, if you would like to override these defaults when testing, there are two ways of doing this. Options can be overridden via the command line or by supplying a configuration file. If the same parameter is overridden both by a command line argument and in the configuration file, the command line value will take precedence.
Overriding network parameters via command line¶
The --minimum-platform-version
, --max-message-size
, --max-transaction-size
and --event-horizon
command line parameters can
be used to override the default network parameters. See Command line options for more information.
Overriding network parameters via a file¶
You can provide a network parameters overrides file using the following syntax:
java -jar corda-tools-network-bootstrapper-4.1-RC01.jar --network-parameter-overrides=<path_to_file>
Or alternatively, by using the short form version:
java -jar corda-tools-network-bootstrapper-4.1-RC01.jar -n=<path_to_file>
The network parameter overrides file is a HOCON file with the following fields, all of which are optional. Any field that is not provided will be ignored. If a field is not provided and you are bootstrapping a new network, a sensible default value will be used. If a field is not provided and you are updating an existing network, the value in the existing network parameters file will be used.
Note
All fields can be used with placeholders for environment variables. For example: ${KEY_STORE_PASSWORD}
would be replaced by the contents of environment
variable KEY_STORE_PASSWORD
. See: corda-configuration-hiding-sensitive-data .
The available configuration fields are listed below:
minimumPlatformVersion: The minimum supported version of the Corda platform that is required for nodes in the network.

maxMessageSize: The maximum permitted message size, in bytes. This is currently ignored but will be used in a future release.

maxTransactionSize: The maximum permitted transaction size, in bytes.

eventHorizon: The time after which nodes will be removed from the network map if they have not been seen during this period, expressed as a duration string (for example "30 days", as in the sample configuration below).

packageOwnership: A list of package owners. See Package namespace ownership for more information. For each package owner, the packageName, keystore, keystorePassword and keystoreAlias fields described there are required.
An example configuration file:
minimumPlatformVersion=4
maxMessageSize=10485760
maxTransactionSize=524288000
eventHorizon="30 days"
packageOwnership=[
{
packageName="com.example"
keystore="myteststore"
keystorePassword="MyStorePassword"
keystoreAlias="MyKeyAlias"
}
]
Package namespace ownership¶
Package namespace ownership is a Corda security feature that allows a compatibility zone to give ownership of parts of the Java package namespace to registered users (e.g. a CorDapp development organisation). The exact mechanism used to claim a namespace is up to the zone operator. A typical approach would be to accept an SSL certificate with the domain in it as proof of domain ownership, or to accept an email from that domain.
Note
Read more about Package ownership here.
A Java package namespace is case insensitive and cannot be a sub-package of an existing registered namespace. See Naming a Package and Naming Conventions for guidelines on naming conventions.
The registration of a Java package namespace requires the creation of a signed certificate as generated by the Java keytool.
The packages can be registered by supplying a network parameters override config file via the command line, using the --network-parameter-overrides
command.
For each package to be registered, the following are required:
packageName: Java package name (e.g. com.my_company).

keystore: The path of the keystore file containing the signed certificate. If a relative path is provided, it is assumed to be relative to the location of the configuration file.

keystorePassword: The password for the given keystore (not to be confused with the key password).

keystoreAlias: The alias for the name associated with the certificate to be associated with the package namespace.
Using the Example CorDapp as an example, we will initialise a simple network and then register and unregister a package namespace. Check out the Example CorDapp and follow the instructions to build it here.
Note
You can point to any existing bootstrapped corda network (this will have the effect of updating the associated network parameters file).
Create a new public key to use for signing the Java package namespace we wish to register:
$JAVA_HOME/bin/keytool -genkeypair -keystore _teststore -storepass MyStorePassword -keyalg RSA -alias MyKeyAlias -keypass MyKeyPassword -dname "O=Alice Corp, L=Madrid, C=ES"
This will generate a key store file called _teststore in the current directory.

Create a network-parameters.conf file in the same directory, with the following information:

packageOwnership=[
  {
    packageName="com.example"
    keystore="_teststore"
    keystorePassword="MyStorePassword"
    keystoreAlias="MyKeyAlias"
  }
]

Register the package namespace to be claimed by the public key generated above:

# Register the Java package namespace using the Network Bootstrapper
java -jar network-bootstrapper.jar --dir build/nodes --network-parameter-overrides=network-parameters.conf

To unregister the package namespace, edit the network-parameters.conf file to remove the package:

packageOwnership=[]

Unregister the package namespace:

# Unregister the Java package namespace using the Network Bootstrapper
java -jar network-bootstrapper.jar --dir build/nodes --network-parameter-overrides=network-parameters.conf
Command line options¶
The Network Bootstrapper can be started with the following command line options:
bootstrapper [-hvV] [--copy-cordapps=<copyCordapps>] [--dir=<dir>]
[--event-horizon=<eventHorizon>] [--logging-level=<loggingLevel>]
[--max-message-size=<maxMessageSize>]
[--max-transaction-size=<maxTransactionSize>]
[--minimum-platform-version=<minimumPlatformVersion>]
[-n=<networkParametersFile>] [COMMAND]
- --dir=<dir>: Root directory containing the node configuration files and CorDapp JARs that will form the test network. It may also contain existing node directories. Defaults to the current directory.
- --copy-cordapps=<copyCordapps>: Whether or not to copy the CorDapp JARs into the nodes' 'cordapps' directory. Possible values: FirstRunOnly, Yes, No. Default: FirstRunOnly.
- --verbose, --log-to-console, -v: If set, prints logging to the console as well as to a file.
- --logging-level=<loggingLevel>: Enable logging at this level and higher. Possible values: ERROR, WARN, INFO, DEBUG, TRACE. Default: INFO.
- --help, -h: Show this help message and exit.
- --version, -V: Print version information and exit.
- --minimum-platform-version: The minimum platform version to use in the network-parameters.
- --max-message-size: The maximum message size to use in the network-parameters, in bytes.
- --max-transaction-size: The maximum transaction size to use in the network-parameters, in bytes.
- --event-horizon: The event horizon to use in the network-parameters.
- --network-parameter-overrides=<networkParametersFile>, -n=<networkParametersFile>: Overrides the default network parameters with those in the given file. See Overriding network parameters via a file for more information.
Sub-commands¶
install-shell-extensions: Install bootstrapper alias and auto completion for bash and zsh. See Shell extensions for CLI Applications for more info.
DemoBench¶
DemoBench is a standalone desktop application that makes it easy to configure and launch local Corda nodes. It is useful for training sessions, demos or just experimentation.
Downloading¶
Installers compatible with the latest Corda release can be downloaded from the Corda website.
Running DemoBench¶
- Configuring a Node
  Each node must have a unique name to identify it to the network map service. DemoBench will suggest node names, nearest cities and local port numbers to use.
  The first node will be a notary. Hence only notary services will be available to be selected in the Services list. For subsequent nodes you may also select any of Corda's other built-in services.
  Press the Start node button to launch the Corda node with your configuration.
- Running Nodes
  DemoBench launches each new node in a terminal emulator. The View Database, Launch Explorer and Launch Web Server buttons will all be disabled until the node has finished booting. DemoBench will then display simple statistics about the node such as its cash balance.
  It is currently impossible from DemoBench to restart a node that has terminated, e.g. because the user typed "bye" at the node's shell prompt. However, that node's data and logs still remain in its directory.
- Exiting DemoBench
  When you terminate DemoBench, it will automatically shut down any nodes and explorers that it has launched and then exit.
- Profiles
  You can save the configurations and CorDapps for all of DemoBench's currently running nodes into a profile, which is a ZIP file with the following layout, e.g.:
notary/
node.conf
cordapps/
banka/
node.conf
cordapps/
bankb/
node.conf
cordapps/
example-cordapp.jar
...
When DemoBench reloads this profile it will close any nodes that it is currently running and then launch these new nodes instead. All nodes will be created with a brand new database. Note that thenode.conf
files within each profile are JSON/HOCON format, and so can be extracted and edited as required.
DemoBench writes a log file to the following location:
- MacOSX/Linux: $HOME/demobench/demobench.log
- Windows: %USERPROFILE%\demobench\demobench.log
Building the Installers¶
Gradle defines tasks that build DemoBench installers using JavaPackager. There are three scripts in the tools/demobench directory of the Corda repository to execute these tasks:
package-demobench-exe.bat
(Windows)package-demobench-dmg.sh
(MacOS)package-demobench-rpm.sh
(Fedora/Linux)
Each script can only be run on its target platform, and each expects the platform’s installation tools already to be available.
- Windows: Inno Setup 5+
- MacOS: The packaging tools should be available automatically. The DMG contents will also be signed if the packager finds a valid
Developer ID Application
certificate with a private key on the keyring. (By default, DemoBench’sbuild.gradle
expects the signing key’s user name to be “R3CEV”.) You can create such a certificate by generating a Certificate Signing Request and then asking your local “Apple team agent” to upload it to the Apple Developer portal. (See here.)
Note
- Please ensure that the
/usr/bin/codesign
application always has access to your certificate’s signing key. You may need to reboot your Mac after making any changes via the MacOS Keychain Access application. - You should use JDK >= 8u152 to build DemoBench on MacOS because this version resolves a warning message that is printed to the terminal when starting each Corda node.
- Ideally, use the JetBrains JDK to build the DMG.
- Fedora/Linux:
rpm-build
packages.
You will also need to define the environment variable JAVA_HOME
to point to the same JDK that you use to run Gradle. The installer will be written to the tools/demobench/build/javapackage/bundles
directory, and can be installed like any other application for your platform.
JetBrains JDK¶
Mac users should note that the best way to build a DemoBench DMG is with the JetBrains JDK which has binary downloads available from BinTray. This JDK has some useful GUI fixes, most notably, when built with this JDK the DemoBench terminal will support emoji and as such, the nicer coloured ANSI progress renderer. It also resolves some issues with HiDPI rendering on Windows.
This JDK does not include JavaPackager, which means that you will still need to copy $JAVA_HOME/lib/ant-javafx.jar
from an Oracle JDK into the corresponding directory within your JetBrains JDK.
Developer Notes¶
Developers wishing to run DemoBench without building a new installer each time can install it locally using Gradle:
$ gradlew tools:demobench:installDist
$ cd tools/demobench/build/install/demobench
$ bin/demobench
Unfortunately, DemoBench’s $CLASSPATH
may be too long for the Windows shell, in which case you can still run DemoBench as follows:
> java -Djava.util.logging.config.class=net.corda.demobench.config.LoggingConfig -jar lib/demobench-$version.jar
While DemoBench can be executed within an IDE, it would be up to the Developer to install all of its runtime
dependencies beforehand into their correct locations relative to the value of the user.dir
system property (i.e. the
current working directory of the JVM):
corda/
corda.jar
corda-webserver.jar
explorer/
node-explorer.jar
cordapps/
bank-of-corda.jar
Node Explorer¶
Note

To run Node Explorer on your machine, you will need JavaFX for Java 8. If you don't have JavaFX installed, you can either download and build your own version of OpenJFX, or use a pre-existing build, like the one offered by Zulu. They have community builds of OpenJFX for Windows, macOS and Linux available on their website.
The node explorer provides views into a node's vault and transaction data using Corda's RPC framework. The user can execute cash transaction commands to issue and move cash to other parties on the network, or to exit cash (i.e. remove it from the ledger).
Running the UI¶
Windows:
gradlew.bat tools:explorer:run
Other:
./gradlew tools:explorer:run
Note
In order to connect to a given node, the node explorer must have access to all CorDapps loaded on that particular node.
By default, it only has access to the finance CorDapp.
All other CorDapps present on the node must be copied to a cordapps
directory located within the directory from which the node explorer is run.
Running demo nodes¶
Node Explorer is included with the DemoBench application, which allows you to create local Corda networks on your desktop. For example:
- Notary
- Bank of Breakfast Tea (Issuer node for GBP)
- Bank of Big Apples (Issuer node for USD)
- Alice (Participant node, for user Alice)
- Bob (Participant node, for user Bob)
DemoBench will deploy all nodes with Corda’s Finance CorDapp automatically, and allow you to launch an instance of Node Explorer for each. You will also be logged into the Node Explorer automatically.
When connected to an Issuer node, a user can execute cash transaction commands to issue and move cash to itself or other parties on the network or to exit cash (for itself only).
When connected to a Participant node a user can only execute cash transaction commands to move cash to other parties on the network.
The Node Explorer is also available as a stand-alone JavaFX application. It is
available from the Corda repositories as corda-tools-explorer
, and can be
run as
java -jar corda-tools-explorer.jar
Note
Use the Explorer in conjunction with the Trader Demo and Bank of Corda samples to use other Issuer nodes.
Interface¶
- Login
- Users can log in to any Corda node using the explorer. The Corda node address, username and password are required; the address defaults to localhost:0 if left blank. Username and password can be configured via the rpcUsers field in the node's configuration file.

- Dashboard
- The dashboard shows the top-level state of the node and vault. Currently, it shows your cash balance and the number of transactions executed. The dashboard is intended to house widgets from different CorDapps and provide useful information to system administrators at a glance.

- Cash
- The cash view shows all currencies you currently own in a tree table format, grouped by issuer -> currency. Individual cash transactions can be viewed by clicking on the table row. The user can also use the search field to narrow down the scope.

- New Transactions
This is where you can create new cash transactions. The user can choose from three transaction types (issue, pay and exit) and any party visible on the network.
General nodes can only execute pay commands to any other party on the network.

- Issuer Nodes
- Issuer nodes can execute issue (to itself or to any other party), pay and exit transactions. The result of the transaction will be visible in the transaction screen when executed.

- Transactions
- The transaction view contains all transactions handled by the node in a table view. It shows basic information in the table, e.g. transaction ID, command type and USD equivalent value. Users can expand a row by double-clicking to view the inputs, outputs and signature details for that transaction.

- Network
- The network view shows the network information on the world map. Currently only the user's node is rendered on the map; this will be extended to other peers in a future release. The map provides an intuitive way of visualising the Corda network and its participants.

- Settings
- Users can configure client preferences in this view.

Note

Although the reporting currency is configurable, FX conversion won't be applied to the values as we don't have an FX service yet.

Node internals¶
Node services¶
This document is intended as a very brief introduction to the current service components inside the node. Whilst not at all exhaustive it is hoped that this will give some context when writing applications and code that use these services, or which are operated upon by the internal components of Corda.
Services within the node¶
The node services represent the various sub-functions of the Corda node. Some are directly accessible to contracts and flows through the ServiceHub, whilst others are the framework internals used to host the node functions. Any public service interfaces are defined in the net.corda.core.node.services package. The ServiceHub interface exposes functionality suitable for flows. The implementation code for all standard services lives in the net.corda.node.services package.
All the services are constructed in the AbstractNode start method. They may also register a shutdown handler during initialisation, which will be called in reverse order to the start registration sequence when Node.stop is called.
The roles of the individual services are described below.
Key management and identity services¶
InMemoryIdentityService¶
The InMemoryIdentityService implements the IdentityService interface and provides a store of mappings between PublicKeys and remote Parties. It is automatically populated from NetworkMapCache updates and is used when translating the PublicKeys exposed in transactions into fully populated Party identities. This service is also used in the default JSON mapping of parties in the web server, thus allowing party names to be used to refer to other nodes’ legal identities. In the future the identity service will be made persistent and extended to allow anonymised session keys to be used in flows where the well-known PublicKeys of nodes need to be hidden from non-involved parties.
PersistentKeyManagementService and E2ETestKeyManagementService¶
Typical usage of these services is to locate an appropriate PrivateKey to complete and sign a verified transaction as part of a flow. The normal node legal identifier keys are typically accessed via helper extension methods on the ServiceHub, but these ultimately delegate signing to internal PrivateKeys from the KeyManagementService. The KeyManagementService interface also allows other keys to be generated if anonymous keys are needed in a flow. Note that this interface works at the level of individual PublicKey and internally matched PrivateKey pairs, but the signing authority may be represented by a CompositeKey on the NodeInfo to allow key clustering and threshold schemes.
The PersistentKeyManagementService is a persistent implementation of the KeyManagementService interface that records the key pairs to a key-value storage table in the database. E2ETestKeyManagementService is a simple implementation of the KeyManagementService that is used to track our KeyPairs for use in unit testing when no database is available.
Messaging and network management services¶
ArtemisMessagingServer¶
The ArtemisMessagingServer service is run internally by the Corda node to host the ArtemisMQ messaging broker that is used for reliable node communications, although the node can be configured to disable this and connect to a remote broker by setting the messagingServerAddress configuration to be the remote broker address. (The MockNode used during testing does not use this service, and has a simplified in-memory network layer instead.) This service is not exposed to any CorDapp code as it is an entirely internal infrastructural component. However, the developer may need to be aware of this component, because the ArtemisMessagingServer is responsible for configuring the network ports (based upon settings in node.conf), and the service configures the security settings of the ArtemisMQ middleware and acts to form bridges between node mailbox queues based upon connection details advertised by the NetworkMapCache. The ArtemisMQ broker is configured to use TLS 1.2 with a custom TrustStore containing a Corda root certificate and a KeyStore with a certificate and key signed by a chain back to this root certificate. These keystores typically reside in the certificates sub folder of the node workspace. For the nodes to be able to connect to each other it is essential that the entire set of nodes are able to authenticate against each other, and thus typically that they share a common root certificate. Also note that the address configuration defined for the server is the basis for the address advertised in the NetworkMapCache and thus must be externally connectable by all nodes in the network.
P2PMessagingClient¶
The P2PMessagingClient is the implementation of the MessagingService interface operating across the ArtemisMQ middleware layer. It typically connects to the local ArtemisMQ hosted within the ArtemisMessagingServer service; however, the messagingServerAddress configuration can be set to a remote broker address if required. The responsibilities of this service include managing the node’s persistent mailbox, sending messages to remote peer nodes, acknowledging properly consumed messages and deduplicating any resent messages. The service also handles the incoming requests from new RPC client sessions and hands them to the CordaRPCOpsImpl to carry out the requests.
InMemoryNetworkMapCache¶
The InMemoryNetworkMapCache implements the NetworkMapCache interface and is responsible for tracking the identities and advertised services of authorised nodes provided by the remote NetworkMapService. Typical use is to search for nodes hosting specific advertised services, e.g. a Notary service or an Oracle service. Also, this service allows mapping of friendly names, or Party identities, to the full NodeInfo, which is used in the StateMachineManager to convert between the PublicKey- or Party-based addressing used in the flows/contracts and the physical host and port information required by the physical ArtemisMQ messaging layer.
Storage and persistence related services¶
DBCheckpointStorage¶
The DBCheckpointStorage service is used from within the StateMachineManager code to persist the progress of flows, thus ensuring that if the program terminates, the flow can be restarted from the same point and completed. This service should not be used by any CorDapp components.
DBTransactionMappingStorage and InMemoryStateMachineRecordedTransactionMappingStorage¶
The DBTransactionMappingStorage is used within the StateMachineManager code to relate transactions and flows. This relationship is exposed in the eventing interface to the RPC clients, thus allowing them to track the end result of a flow and map to the actual transactions/states completed. Otherwise this service is unlikely to be accessed by any CorDapps. The InMemoryStateMachineRecordedTransactionMappingStorage service is available as a non-persistent implementation for unit tests with no database.
DBTransactionStorage¶
The DBTransactionStorage service is a persistent implementation of the TransactionStorage interface and allows flows read-only access to full transactions, plus transaction-level event callbacks. Storage of new transactions must be made via the recordTransactions method on the ServiceHub, not via a direct call to this service, so that the various event notifications can occur.
NodeAttachmentService¶
The NodeAttachmentService provides an implementation of the AttachmentStorage interface exposed on the ServiceHub, allowing transactions to add documents, copies of the contract code and binary data to transactions. The service is also interfaced to by the web server, which allows files to be uploaded via an HTTP POST request.
Flow framework and event scheduling services¶
StateMachineManager¶
The StateMachineManager is the service that runs the active flows of the node, whether initiated by an RPC client, the web interface, a scheduled state activity, or triggered by receipt of a message from another node. The StateMachineManager wraps the flow code (extensions of the FlowLogic class) inside an instance of the FlowStateMachineImpl class, which is a Quasar Fiber. This allows the StateMachineManager to suspend flows at all key lifecycle points and persist their serialized state to the database via the DBCheckpointStorage service. This process uses the facilities of the Quasar Fibers library, hence the requirement for the node to run the Quasar java instrumentation agent in its JVM.
In operation the StateMachineManager is typically running an active flow on its server thread until it encounters a blocking, or externally visible, operation such as sending a message, waiting for a message, or initiating a subFlow. The fiber is then suspended and its stack frames serialized to the database, thus ensuring that if the node is stopped or crashes at this point, the flow will restart with exactly the same action again. To further ensure consistency, every event which resumes a flow opens a database transaction, which is committed during this suspension process, ensuring that the database modifications, e.g. state commits, stay in sync with the mutating changes of the flow. Having recorded the fiber state, the StateMachineManager then carries out the network actions as required (internally one flow message exchanged may actually involve several physical session messages to authenticate and invoke registered flows on the remote nodes). The flow will stay suspended until the required message is returned and the scheduler will resume processing of other activated flows. On receipt of the expected response message from the network layer the StateMachineManager locates the appropriate flow, resuming it immediately after the blocking step with the received message. Thus from the perspective of the flow the code executes as a simple linear progression of processing, even if there were node restarts and possibly message resends (the messaging layer deduplicates messages based on an id that is part of the checkpoint).
The StateMachineManager service is not directly exposed to the flows, or contracts themselves.
NodeSchedulerService¶
The NodeSchedulerService implements the SchedulerService interface and monitors the Vault updates to track any new states that implement the SchedulableState interface and require automatic scheduled flow initiation. At the scheduled due time the NodeSchedulerService will create a new flow instance, passing it a reference to the state that triggered the event. The flow can then begin whatever action is required. Note that the scheduled activity occurs in all nodes holding the state in their Vault; it may therefore be required for the flow to exit early if the current node is not the intended initiator.
Vault related services¶
NodeVaultService¶
The NodeVaultService implements the VaultService interface to allow access to the node’s own set of unconsumed states. The service does this by tracking update notifications from the TransactionStorage service and processing relevant updates to delete consumed states and insert new states. The resulting update is then persisted to the database. The VaultService then exposes query and event notification APIs to flows and CorDapp services to allow them to respond to updates, or query for states meeting various conditions to begin the formation of new transactions consuming them. The equivalent services are also forwarded to RPC clients, so that they may show updating views of states held by the node.
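For instance, a flow or CorDapp service might track unconsumed Cash states; a hedged sketch (Cash.State comes from the finance CorDapp):

import net.corda.core.node.services.Vault
import net.corda.core.node.services.vault.QueryCriteria
import net.corda.finance.contracts.asset.Cash

// Snapshot of unconsumed Cash states plus an observable of future updates.
val criteria = QueryCriteria.VaultQueryCriteria(Vault.StateStatus.UNCONSUMED)
val feed = serviceHub.vaultService.trackBy(Cash.State::class.java, criteria)
val snapshot = feed.snapshot
val updates = feed.updates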
NodeSchemaService and HibernateObserver¶
The HibernateObserver runs within the node framework and listens for vault state updates; it then uses the mapping services of the NodeSchemaService to record the states in auxiliary database tables. This allows Corda state updates to be exposed to external legacy systems by insertion of unpacked data into existing tables. To enable these features the contract state must implement the QueryableState interface to define the mappings.
Corda Web Server¶
A simple web server is provided that embeds the Jetty servlet container. The Corda web server is not meant to be used for real, production-quality web apps. Instead it shows one example way of using Corda RPC in web apps to provide a REST API on top of the Corda native RPC mechanism.
Note
The Corda web server may be removed in future and replaced with sample specific webapps using a standard framework like Spring Boot.
Networking and messaging¶
Corda uses AMQP/1.0 over TLS between nodes which is currently implemented using Apache Artemis, an embeddable message queue broker. Building on established MQ protocols gives us features like persistence to disk, automatic delivery retries with backoff and dead-letter routing, security, large message streaming and so on.
Artemis is hidden behind a thin interface that also has an in-memory only implementation suitable for use in unit tests and visualisation tools.
Note
A future version of Corda will allow the MQ broker to be split out of the main node and run as a separate server. We may also support non-Artemis implementations via JMS, allowing the broker to be swapped out for alternative implementations.
There are multiple ways of interacting with the network. When writing an application you typically won’t use the messaging subsystem directly. Instead you will build on top of the flow framework, which adds a layer on top of raw messaging to manage multi-step flows and let you think in terms of identities rather than specific network endpoints.
Network Map Service¶
Supporting the messaging layer is a network map service, which is responsible for tracking public nodes on the network.
Nodes have an internal component, the network map cache, which contains a copy of the network map (backed up in the database to persist that information across restarts, in case the network map server is down). When a node starts up, its cache fetches a copy of the full network map (from the server, or from the filesystem in development mode). After that it polls at a regular time interval for the network map and applies any related changes locally. Nodes do not automatically deregister themselves, so (for example) nodes going offline briefly for maintenance are retained in the network map, and messages for them will be queued, minimising disruption.
Additionally, on every restart and on a daily basis, nodes submit signed NodeInfos to the map service. When the network map gets signed, these changes are distributed as new network data. NodeInfo republishing is treated as a heartbeat from the node; based on it, the network map service is able to figure out which nodes can be considered stale and removed from the network map document after the eventHorizon time.
Message queues¶
The node makes use of various queues for its operation. The more important ones are described below. Others are used for maintenance and other minor purposes.
p2p.inbound.$identity:
The node listens for messages sent from other peer nodes on this queue. Only clients who are authenticated to be nodes on the same network are given permission to send. Messages which are routed internally are also sent to this queue (e.g. two flows on the same node communicating with each other).

internal.peers.$identity:
These are a set of private queues only available to the node, which it uses to route messages destined to other peers. The queue name ends in the base 58 encoding of the peer’s identity key. There is at most one queue per peer. The broker creates a bridge from this queue to the peer’s p2p.inbound.$identity queue, using the network map service to look up the peer’s network address.

internal.services.$identity:
These are private queues the node may use to route messages to services. The queue name ends in the base 58 encoding of the service’s owning identity key. There is at most one queue per service identity (but note that any one service may have several identities). The broker creates bridges to all nodes in the network advertising the service in question. When a session is initiated with a service counterparty the handshake is pushed onto this queue, and a corresponding bridge is used to forward the message to an advertising peer’s p2p queue. Once a peer is picked the session continues on as normal.

rpc.server:
RPC clients send their requests here, and it’s only open for sending by clients authenticated as RPC users.

rpc.client.$user.$random:
RPC clients are given permission to create a temporary queue incorporating their username ($user), on which they receive their RPC responses.
Security¶
Clients attempting to connect to the node’s broker fall in one of four groups:
- Anyone connecting with the username SystemUsers/Node or SystemUsers/NodeRPC is treated as the node hosting the brokers, or a logical component of the node. The TLS certificate they provide must match the one the broker has for the node. If that’s the case they are given full access to all valid queues, otherwise they are rejected.
- Anyone connecting with the username SystemUsers/Peer is treated as a peer on the same Corda network as the node. Their TLS root CA must be the same as the node’s root CA – the root CA is the doorman of the network and having the same root CA implies we’ve been let in by the same doorman. If they are part of the same network then they are only given permission to send to our p2p.inbound.$identity queue, otherwise they are rejected.
- Every other username is treated as an RPC user and authenticated against the node’s list of valid RPC users. If that is successful then they are only given sufficient permission to perform RPC, otherwise they are rejected.
- Clients connecting without a username and password are rejected.
Artemis provides a feature of annotating each received message with the validated user. This allows the node’s messaging service to provide authenticated messages to the rest of the system. For the first two client types described above the validated user is the X.500 subject of the client TLS certificate. This allows the flow framework to authentically determine the Party initiating a new flow. For RPC clients the validated user is the username itself, and the RPC framework uses this to determine what permissions the user has.
The broker also does host verification when connecting to another peer. It checks that the TLS certificate subject matches with the advertised X.500 legal name from the network map service.
Implementation details¶
The components of the system that need to communicate and authenticate each other are:
- The Artemis P2P broker (currently runs inside the node’s JVM process, but in the future it will be able to run as a separate server):
  - Opens an Acceptor configured with the doorman’s certificate in the trustStore and the node’s SSL certificate in the keyStore.
- The Artemis RPC broker (currently runs inside the node’s JVM process, but in the future it will be able to run as a separate server):
  - Opens an “Admin” Acceptor configured with the doorman’s certificate in the trustStore and the node’s SSL certificate in the keyStore.
  - Opens a “Client” Acceptor with the SSL settings configurable. This acceptor does not require SSL client-auth.
- The current node hosting the brokers:
  - Connects to the P2P broker using the SystemUsers/Node user and the node’s keyStore and trustStore.
  - Connects to the “Admin” Acceptor of the RPC broker using the SystemUsers/NodeRPC user and the node’s keyStore and trustStore.
- RPC clients (third-party applications that need to communicate with the node):
  - Connect to the “Client” Acceptor of the RPC broker using the username/password provided by the node’s admin. The client verifies the node’s certificate using a trustStore provided by the node’s admin.
- Peer nodes (other nodes on the network):
  - Connect to the P2P broker using the SystemUsers/Peer user and a doorman-signed certificate. The authentication is performed based on the root CA.
Component library¶
Contract catalogue¶
There are a number of contracts supplied with Corda, which cover both core functionality (such as cash on ledger) and provide examples of how to model complex contracts (such as interest rate swaps). There is also a Dummy contract; however, it does not provide any meaningful functionality and is intended purely for testing purposes.
Cash¶
The Cash contract’s state objects represent an amount of some issued currency, owned by some party. Any currency can be issued by any party, and it is up to the recipient to determine whether they trust the issuer. Generally nodes are expected to have criteria (such as a whitelist) that issuers must fulfil for cash they issue to be accepted.
Cash state objects implement the FungibleAsset interface, and can be used by the commercial paper and obligation contracts as part of settlement of an outstanding debt. The contracts’ verification functions require that cash state objects of the correct value are received by the beneficiary as part of the settlement transaction.
The cash contract supports issuing, moving and exiting (destroying) states. Note, however, that issuance cannot be part of the same transaction as other cash commands, in order to minimise complexity in balance verification.
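A hedged sketch of exercising the contract through the finance CorDapp’s CashIssueFlow over RPC; rpcProxy (an authenticated CordaRPCOps handle) and notary (looked up from the network map) are assumed to have been obtained already:

import java.util.Currency
import net.corda.core.contracts.Amount
import net.corda.core.messaging.startFlow
import net.corda.core.utilities.OpaqueBytes
import net.corda.core.utilities.getOrThrow
import net.corda.finance.flows.CashIssueFlow

// Issue $100.00 to ourselves; the quantity is in the smallest unit (cents).
val amount = Amount(100_00L, Currency.getInstance("USD"))
val issuerRef = OpaqueBytes.of(1)  // issuer reference scoping this issuance
val result = rpcProxy.startFlow(::CashIssueFlow, amount, issuerRef, notary)
    .returnValue.getOrThrow()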
Cash shares a common superclass, OnLedgerAsset, with the Commodity contract. This implements common behaviour of assets which can be issued, moved and exited on chain, with the subclasses handling asset-specific data types and behaviour.
Note
Corda supports a pluggable cash selection algorithm by implementing the CashSelection interface. The default implementation uses an H2-specific query that can be overridden for different database providers. Please see CashSelectionH2Impl and its associated declaration in META-INF\services\net.corda.finance.contracts.asset.CashSelection
Commodity¶
The Commodity contract is an early-stage example of a non-currency contract whose states implement the FungibleAsset interface. This is used as a proof of concept for non-cash obligations.
Commercial paper¶
CommercialPaper is a very simple obligation to pay an amount of cash at some future point in time (the maturity date), and exists primarily as a simplified contract for use in tutorials. Commercial paper supports issuing, moving and redeeming (settling) states. Unlike the full obligation contract it does not support locking the state so it cannot be settled if the obligor defaults on payment, or netting of state objects. All commands are exclusive of the other commercial paper commands. Use the Obligation contract for more advanced functionality.
Interest rate swaps¶
The Interest Rate Swap (IRS) contract is a bilateral contract to implement a vanilla fixed / floating same currency interest rate swap. In general, an IRS allows two counterparties to modify their exposure from changes in the underlying interest rate. They are often used as a hedging instrument, convert a fixed rate loan to a floating rate loan, vice versa etc.
See “Interest rate swaps” for full details on the IRS contract.
Obligation¶
The obligation contract’s state objects represent an obligation to provide some asset, which would generally be a cash state object, but can be any contract state object fulfilling the FungibleAsset interface, including other obligations. The obligation contract uses objects referred to as Terms to group commands and state objects together. Terms are a subset of an obligation state object, including details of what should be paid, when, and to whom.
Obligation state objects can be issued, moved and exited as with any fungible asset. The contract also supports state object netting and lifecycle changes (marking the obligation that a state object represents as having defaulted, or reverting it to the normal state after marking as having defaulted). The Net command cannot be included with any other obligation commands in the same transaction, as it applies to state objects with different beneficiaries, and as such applies across multiple terms.
All other obligation contract commands specify obligation terms (what is to be delivered, by whom and by when), which are used as a grouping key for input/output states and commands. Issuance and lifecycle commands are mutually exclusive of other commands (move/exit) which apply to the same obligation terms, but multiple commands can be present in a single transaction if they apply to different terms. For example, a contract can have two different Issue commands as long as they apply to different terms, but could not have an Issue and a Net, or an Issue and a Move, that apply to the same terms.
Netting of obligations supports close-out netting (which can be triggered by either obligor or beneficiary, but is limited to bilateral netting), and payment netting (which requires signatures from all involved parties, but supports multilateral netting).
Financial model¶
Corda provides a large standard library of data types used in financial applications and contract state objects. These provide a common language for states and contracts.
Amount¶
The Amount class is used to represent an amount of some fungible asset. It is a generic class which wraps around a type used to define the underlying product, called the token. For instance it can be the standard JDK type Currency, or an Issued instance, or this can be a more complex type such as an obligation contract issuance definition (which in turn contains a token definition for whatever the obligation is to be settled in). Custom token types should implement TokenizableAssetInfo to allow the Amount conversion helpers fromDecimal and toDecimal to calculate the correct displayTokenSize.
Note
Fungible is used here to mean that instances of an asset is interchangeable for any other identical instance, and that they can be split/merged. For example a £5 note can reasonably be exchanged for any other £5 note, and a £10 note can be exchanged for two £5 notes, or vice-versa.
Here are some examples:
// A quantity of some specific currency like pounds, euros, dollars etc.
Amount<Currency>
// A quantity of currency that is issued by a specific issuer, for instance central bank vs other bank dollars
Amount<Issued<Currency>>
// A quantity of a product governed by specific obligation terms
Amount<Obligation.Terms<P>>
Amount represents quantities as integers. You cannot use Amount to represent negative quantities or fractional quantities: if you wish to do this then you must use a different type, typically BigDecimal. For currencies the quantity represents pennies, cents, or whatever else is the smallest integer amount for that currency, but for other assets it might mean anything, e.g. 1000 tonnes of coal, or kilowatt-hours. The precise conversion ratio to displayable amounts is via the displayTokenSize property, which is the BigDecimal numeric representation of a single token as it would be written. Amount also defines methods to do overflow/underflow-checked addition and subtraction (these are operator overloads in Kotlin and can be used as regular methods from Java). More complex calculations should typically be done in BigDecimal and converted back, to ensure due consideration of rounding and to ensure token conservation.
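A short illustration of these helpers and the checked arithmetic (GBP chosen arbitrarily):

import java.math.BigDecimal
import java.util.Currency
import net.corda.core.contracts.Amount

val gbp = Currency.getInstance("GBP")
val five = Amount(500L, gbp)                                 // £5.00, held as 500 pennies
val twoFifty = Amount.fromDecimal(BigDecimal("2.50"), gbp)   // uses displayTokenSize
val total = five + twoFifty                                  // overflow-checked: £7.50
val display = total.toDecimal()                              // BigDecimal 7.50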
Issued refers to a product (which can be cash, a cash-like thing, assets, or generally anything else that’s quantifiable with integer quantities) and an associated PartyAndReference that describes the issuer of that contract. An issued product typically follows a lifecycle which includes issuance, movement and exiting from the ledger (for example, see the Cash contract and its associated state and commands).
To represent movements of Amount tokens, use the AmountTransfer type, which records the quantity and perspective of a transfer. Positive values indicate a movement of tokens from a source, e.g. a Party or CompositeKey, to a destination. Negative values can be used to indicate a retrograde motion of tokens from destination to source. AmountTransfer supports addition (as a Kotlin operator, or Java method) to provide netting and aggregation of flows. The apply method can be used to process a list of attributed Amount objects in a List<SourceAndAmount> to carry out the actual transfer.
Financial states¶
In addition to the common state types, a number of interfaces extend ContractState to model financial state, such as:
LinearState - A state which has a unique identifier beyond its StateRef and carries it through state transitions. Such a state cannot be duplicated, merged or split in a transaction: only continued or deleted. A linear state is useful when modelling an indivisible/non-fungible thing like a specific deal, or an asset that can’t be split (like a rare piece of art).
DealState - A LinearState representing an agreement between two or more parties. Intended to simplify implementing generic protocols that manipulate many agreement types.
FungibleAsset - A FungibleAsset is intended to be used for contract states representing assets which are fungible, countable and issued by a specific party. States contain assets which are equivalent (such as cash of the same currency), so records of their existence can be merged or split as needed where the issuer is the same. For instance, dollars issued by the Fed are fungible and countable (in cents), barrels of West Texas crude are fungible and countable (oil from two small containers can be poured into one large container), shares of the same class in a specific company are fungible and countable, and so on.
The following diagram illustrates the complete Contract State hierarchy:

Note there are currently two packages, a core library and a finance model specific library. Developers may re-use or extend the Finance types directly or write their own by extending the base types from the Core library.
Interest rate swaps¶
The Interest Rate Swap (IRS) Contract (source: IRS.kt, IRSUtils.kt, IRSExport.kt) is a bilateral contract to implement a vanilla fixed / floating same currency IRS.
In general, an IRS allows two counterparties to modify their exposure from changes in the underlying interest rate. They are often used as a hedging instrument, convert a fixed rate loan to a floating rate loan, vice versa etc.
The IRS contract exists over a period of time (normally measurable in years). It starts on its value date (although this is not the agreement date), and is considered to be no longer active after its maturity date. During that time, there is an exchange of cash flows which are calculated by looking at the economics of each leg. These are based upon an amount that is not actually exchanged but notionally used for the calculation (and is hence known as the notional amount), and a rate that is either fixed at the creation of the swap (for the fixed leg), or based upon a reference rate that is retrieved during the swap (for the floating leg). An example reference rate might be something such as ‘LIBOR 3M’.
The fixed leg has its rate computed and set in advance, whereas the floating leg has a fixing process whereby the rate for the next period is fixed with relation to a reference rate. Then, a calculation is performed such that the interest due over that period multiplied by the notional is paid (normally at the end of the period). If these two legs have the same payment date, then these flows can be offset against each other (in reality there are normally a number of these swaps that are live between two counterparties, so further netting is performed at counterparty level).
The fixed leg and floating leg do not have to have the same period frequency. In fact, conventional swaps do not have the same period.
Currently, there is no notion of an actual payment or obligation being performed in the contract code we have written; it merely represents that the payment needs to be made.
Consider the diagram below; the x-axis represents time and the y-axis the size of the leg payments (not to scale), from the view of the floating leg receiver / fixed leg payer. The enumerated documents represent the versions of the IRS as it progresses (note that the first version exists before the value date), the dots on the “y=0” line represent an interest rate value becoming available, and the curved arrow indicates to which period the fixing applies.

Two days (by convention, although this can be modified) before the value date (i.e. at the start of the swap) in the red period, the reference rate is observed from an oracle and fixed - in this instance, at 1.1%. At the end of the accrual period, there is an obligation from the floating leg payer of 1.1% * notional amount * days in the accrual period / 360. (Also note that the result of “days in the accrual period / 360” is also known as the day count factor, although other conventions are allowed and will be supported). This amount is then paid at a determined time at the end of the accrual period.
Again, two working days before the blue period, the rate is fixed (this time at 0.5% - however in reality, the rates would not be so significantly different), and the same calculation is performed to evaluate the payment that will be due at the end of this period.
This process continues until the swap reaches maturity and the final payments are calculated.
Creating an instance and lifecycle¶
There are two valid operations on an IRS. The first is to generate it via the Agree command (signed by both parties) and the second (and repeated operation) is Fix, to apply a rate fixing.
To see the minimum dataset required for the creation of an IRS, refer to IRSTests.kt, which has two examples in the function IRSTests.createDummyIRS(). Implicitly, when the agree function is called, the floating leg and fixed leg payment schedules are created (more details below) and can be queried.
Once an IRS has been agreed, the only valid operation is to apply a fixing on one of the entries in the Calculation.floatingLegPaymentSchedule map. Fixes do not have to be applied in order (although it does make most sense to do so).
Examples of applying fixings to rates can be seen in IRSTests.generateIRSandFixSome(), which loops through the next fixing date of an IRS that is created with the above example function and then applies a fixing of 0.052% to each floating event.
Currently, there are no matured, termination or dispute operations.
Technical details¶
The contract itself comprises four data state classes: FixedLeg, FloatingLeg, Common and Calculation. Recall that the platform model is strictly immutable. To further that, between states, the only class that is modified is the Calculation class.
The Common data class contains all data that is general to the entire swap, e.g. data like the trade identifier, valuation date, etc.
The Fixed and Floating leg classes derive from a common base class, CommonLeg. This is due to the simple reason that they share a lot of common fields.
The CommonLeg class contains the notional amount, a payment frequency, the effective date (as well as an adjustment option), a termination date (and optional adjustment), the day count basis for day factor calculation, the payment delay and calendar for the payment, as well as the accrual adjustment options.
The FixedLeg contains all the details for the CommonLeg as well as payer details, the rate the leg is fixed at and the date roll convention (i.e. what to do if the calculated date lands on a bank holiday or weekend).
The FloatingLeg contains all the details for the CommonLeg and payer details, roll convention, the fixing roll convention, which day of the month the reset is calculated, the frequency period of the fixing, the fixing calendar and the details for the reference index (source and tenor).
The Calculation class contains an expression (that can be evaluated via the ledger using variables provided and also any members of the contract) and two schedules - a floatingLegPaymentSchedule and a fixedLegPaymentSchedule. The fixed leg schedule is obviously pre-ordained; however, during the lifetime of the swap, the floating leg schedule is regenerated upon each fixing being presented.
For this reason, there are two helper functions on the floating leg: Calculation.getFixing returns the date of the earliest unset fixing, and Calculation.applyFixing returns a new Calculation object with the revised fixing in place. Note that both schedules are, for consistency, indexed by payment dates, but the fixing is (due to the convention of taking place two days previously) not going to be on that date.
Note
Payment events in the floatingLegPaymentSchedule that start as a FloatingRatePaymentEvent (which is a representation of a payment for a rate that has not yet been finalised) are replaced in their entirety with an equivalent FixedRatePaymentEvent (which is the same type that is on the FixedLeg).
Serialization¶
Object serialization¶
Introduction¶
Object serialization is the process of converting objects into a stream of bytes; deserialization is the reverse process of creating objects from a stream of bytes. It takes place every time nodes pass objects to each other as messages, when objects are sent to or from RPC clients of the node, and when we store transactions in the database.
Corda pervasively uses a custom form of type safe binary serialisation. This stands in contrast to some other systems that use weakly or untyped string-based serialisation schemes like JSON or XML. The primary drivers for this were:
A desire to have a schema describing what has been serialized alongside the actual data:
- To assist with versioning, both in terms of being able to interpret data archived long ago (e.g. trades from a decade ago, long after the code has changed) and between differing code versions.
- To make it easier to write generic code e.g. user interfaces that can navigate the serialized form of data.
- To support cross platform (non-JVM) interaction, where the format of a class file is not so easily interpreted.
A desire to use a documented and static wire format that is platform independent, and is not subject to change with 3rd party library upgrades, etc.
A desire to support open-ended polymorphism, where the number of subclasses of a superclass can expand over time and the subclasses do not need to be defined in the schema upfront. This is key to many Corda concepts, such as states.
Increased security by constructing deserialized objects through supported constructors, rather than having data inserted directly into their fields without an opportunity to validate consistency or intercept attempts to manipulate supposed invariants.
Binary formats work better with digital signatures than text based formats, as there’s much less scope for changes that modify syntax but not semantics.
Whitelisting¶
In classic Java serialization, any class on the JVM classpath can be deserialized. This is a source of exploits and vulnerabilities that exploit the large set of third-party libraries that are added to the classpath as part of a JVM application’s dependencies and carefully craft a malicious stream of bytes to be deserialized. In Corda, we strictly control which classes can be deserialized (and, pro-actively, serialized) by insisting that each (de)serializable class is part of a whitelist of allowed classes.
To add a class to the whitelist, you must use either of the following mechanisms:
- Add the @CordaSerializable annotation to the class. This annotation can be present on the class itself, on any super class of the class, on any interface implemented by the class or its super classes, or any interface extended by an interface implemented by the class or its super classes.
- Implement the SerializationWhitelist interface and specify a list of whitelisted classes.
There is also a built-in Corda whitelist (see the DefaultWhitelist class) that whitelists common JDK classes for convenience. This whitelist is not user-editable.
The annotation is the preferred method for whitelisting. An example is shown in Using the client RPC API. It’s reproduced here as an example of both ways you can do this for a couple of example classes.
// Not annotated, so need to whitelist manually.
data class ExampleRPCValue(val foo: String)
// Annotated, so no need to whitelist manually.
@CordaSerializable
data class ExampleRPCValue2(val bar: Int)
class ExampleRPCSerializationWhitelist : SerializationWhitelist {
// Add classes like this.
override val whitelist = listOf(ExampleRPCValue::class.java)
}
Note
Several of the core interfaces at the heart of Corda are already annotated and so any classes that implement them will automatically be whitelisted. This includes Contract, ContractState and CommandData.
Warning
Java 8 Lambda expressions are not serializable except in flow checkpoints, and then not by default. The syntax to declare a serializable Lambda expression that will work with Corda is Runnable r = (Runnable & Serializable) () -> System.out.println("Hello World");, or Callable<String> c = (Callable<String> & Serializable) () -> "Hello World";.
AMQP¶
Corda uses an extended form of AMQP 1.0 as its binary wire protocol.
Corda serialisation is currently used for:
- Peer-to-peer networking.
- Persisted messages, like signed transactions and states.
For the checkpointing of flows Corda uses a private scheme that is subject to change. It is currently based on the Kryo framework, but this may not be true in future.
This separation of serialization schemes into different contexts allows us to use the most suitable framework for that context rather than attempting to force a one-size-fits-all approach. Kryo is more suited to the serialization of a program’s stack frames, as it is more flexible than our AMQP framework in what it can construct and serialize. However, that flexibility makes it exceptionally difficult to make secure. Conversely, our AMQP framework allows us to concentrate on a secure framework that can be reasoned about and thus made safer, with far fewer security holes.
Selection of serialization context should, for the most part, be opaque to CorDapp developers, with the Corda framework selecting the correct context as configured.
This document describes what is currently and what will be supported in the Corda AMQP format from the perspective of CorDapp developers, to allow CorDapps to take into consideration the future state. The AMQP serialization format will continue to apply the whitelisting functionality that is already in place and described in Object serialization.
Core Types¶
This section describes the classes and interfaces that the AMQP serialization format supports.
Collection Types¶
The following collection types are supported. Any implementation of the following will be mapped to an implementation of the interface or class on the other end. For example, if you use a Guava implementation of a collection, it will deserialize as the primitive collection type.
The declared types of properties should only use these types, and not any concrete implementation types (e.g. Guava implementations). Collections must specify their generic type, the generic type parameters will be included in the schema, and the element’s type will be checked against the generic parameters when deserialized.
java.util.Collection
java.util.List
java.util.Set
java.util.SortedSet
java.util.NavigableSet
net.corda.core.utilities.NonEmptySet
java.util.Map
java.util.SortedMap
java.util.NavigableMap
However, as a convenience, we explicitly support the concrete implementation types below, and they can be used as the declared types of properties.
java.util.LinkedHashMap
java.util.TreeMap
java.util.EnumSet
java.util.EnumMap (but only if there is at least one entry)
JVM primitives¶
All the primitive types are supported.
boolean
byte
char
double
float
int
long
short
JDK Types¶
The following JDK library types are supported:
java.io.InputStream
java.lang.Boolean
java.lang.Byte
java.lang.Character
java.lang.Class
java.lang.Double
java.lang.Float
java.lang.Integer
java.lang.Long
java.lang.Short
java.lang.StackTraceElement
java.lang.String
java.lang.StringBuffer
java.math.BigDecimal
java.security.PublicKey
java.time.DayOfWeek
java.time.Duration
java.time.Instant
java.time.LocalDate
java.time.LocalDateTime
java.time.LocalTime
java.time.Month
java.time.MonthDay
java.time.OffsetDateTime
java.time.OffsetTime
java.time.Period
java.time.YearMonth
java.time.Year
java.time.ZonedDateTime
java.time.ZoneId
java.time.ZoneOffset
java.util.BitSet
java.util.Currency
java.util.UUID
Third-Party Types¶
The following 3rd-party types are supported:
kotlin.Unit
kotlin.Pair
org.apache.activemq.artemis.api.core.SimpleString
Corda Types¶
Any classes and interfaces in the Corda codebase annotated with @CordaSerializable are supported.
All Corda exceptions that are expected to be serialized inherit from CordaThrowable via either CordaException (for checked exceptions) or CordaRuntimeException (for unchecked exceptions). Any Throwable that is serialized but does not conform to CordaThrowable will be converted to a CordaRuntimeException, with the original exception type and other properties retained within it.
Custom Types¶
Your own types must adhere to the following rules to be supported:
Classes¶
General Rules¶
The class must be compiled with parameter names included in the .class file. This is the default in Kotlin but must be turned on in Java using the -parameters command line option to javac
Note
In circumstances where classes cannot be recompiled, such as when using a third-party library, a proxy serializer can be used to avoid this problem. Details on creating such an object can be found on the Pluggable Serializers for CorDapps page.
The class must be annotated with @CordaSerializable
The declared types of constructor arguments, getters, and setters must be supported, and where generics are used, the generic parameter must be a supported type, an open wildcard (*), or a bounded wildcard which is currently widened to an open wildcard
Any superclass must adhere to the same rules, but can be abstract
Object graph cycles are not supported, so an object cannot refer to itself, directly or indirectly
Constructor Instantiation¶
The primary way Corda’s AMQP serialization framework instantiates objects is via a specified constructor. This is used to first determine which properties of an object are to be serialised, then, on deserialization, it is used to instantiate the object with the serialized values.
It is recommended that serializable objects in Corda adhere to the following rules, as they allow immutable state objects to be deserialised:
- A Java Bean getter for each of the properties in the constructor, with a name of the form getX. For example, for a constructor parameter foo, there must be a getter called getFoo(). If foo is a boolean, the getter may optionally be called isFoo() (this is why the class must be compiled with parameter names turned on)
- A constructor which takes all of the properties that you wish to record in the serialized form. This is required in order for the serialization framework to reconstruct an instance of your class
- If more than one constructor is provided, the serialization framework needs to know which one to use. The @ConstructorForDeserialization annotation can be used to indicate which one. For a Kotlin class without the @ConstructorForDeserialization annotation, the primary constructor will be selected
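For instance, a minimal sketch (hypothetical Payment class) of selecting among multiple constructors might look like this:

import net.corda.core.serialization.ConstructorForDeserialization
import net.corda.core.serialization.CordaSerializable

@CordaSerializable
class Payment @ConstructorForDeserialization constructor(val amount: Long, val currency: String) {
    // Convenience constructor; not used by the serialization framework because
    // the primary constructor is explicitly annotated
    constructor(amount: Long) : this(amount, "GBP")
}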
In Kotlin, this maps cleanly to a data class where the getters are synthesized automatically. For example, suppose we have the following data class:
data class Example (val a: Int, val b: String)
Properties a
and b
will be included in the serialised form.
However, properties not mentioned in the constructor will not be serialised. For example, in the following code,
property c
will not be considered part of the serialised form:
data class Example (val a: Int, val b: String) {
var c: Int = 20
}
var e = Example(10, "hello")
e.c = 100
val e2 = e.serialize().deserialize() // e2.c will be 20, not 100!!!
Setter Instantiation¶
As an alternative to constructor-based initialisation, Corda can also determine the important elements of an object by inspecting the getter and setter methods present on the class. If a class has only a default constructor and properties then the serializable properties will be determined by the presence of both a getter and setter for that property that are both publicly visible (i.e. the class adheres to the classic idiom of mutable JavaBeans).
On deserialization, a default instance will first be created, and then the setters will be invoked on that object to populate it with the correct values.
For example:
class Example(var a: Int, var b: Int, var c: Int)
class Example {
private int a;
private int b;
private int c;
public int getA() { return a; }
public int getB() { return b; }
public int getC() { return c; }
public void setA(int a) { this.a = a; }
public void setB(int b) { this.b = b; }
public void setC(int c) { this.c = c; }
}
Warning
We do not recommend this pattern! Corda tries to use immutable data structures throughout, and if you rely heavily on mutable JavaBean style objects then you may sometimes find the API behaves in unintuitive ways.
Inaccessible Private Properties¶
Whilst the Corda AMQP serialization framework supports private object properties without publicly accessible getter methods, this development idiom is strongly discouraged.
For example:
class C(val a: Int, private val b: Int)
class C {
public Integer a;
private Integer b;
public C(Integer a, Integer b) {
this.a = a;
this.b = b;
}
}
When designing Corda states, it should be remembered that they are not, despite appearances, traditional OOP style objects. They are signed over, transformed, serialised, and relationally mapped. As such, all elements should be publicly accessible by design.
Warning
IDEs will erroneously indicate that properties can be given something other than public visibility. Ignore this: whilst it will work, as discussed above there are many reasons why this isn't a good idea.
Providing a public getter, as per the following example, is acceptable:
class C(val a: Int, b: Int) {
var b: Int = b
private set
}
class C {
public Integer a;
private Integer b;
C(Integer a, Integer b) {
this.a = a;
this.b = b;
}
public Integer getB() {
return b;
}
}
Mismatched Class Properties / Constructor Parameters¶
Consider an example where you wish to ensure that a container-typed property of a class is always sorted using some specific criteria, yet you wish to maintain the immutability of the class.
This could be codified as follows:
@CordaSerializable
class ConfirmRequest(statesToConsume: List<StateRef>, val transactionId: SecureHash) {
companion object {
private val stateRefComparator = compareBy<StateRef>({ it.txhash }, { it.index })
}
private val states = statesToConsume.sortedWith(stateRefComparator)
}
The intention in the example is to always ensure that the states are stored in a specific order regardless of the ordering
of the list used to initialise instances of the class. This is achieved by using the first constructor parameter as the
basis for a private member. However, because that member is not mentioned in the constructor (whose parameters determine
what is serializable as discussed above) it would not be serialized. In addition, as there is no provided mechanism to retrieve
a value for statesToConsume
, we would fail to build a serializer for this class.
In this case a secondary constructor annotated with @ConstructorForDeserialization
would not be a valid solution as the
two signatures would be the same. Best practice is thus to provide a getter for the constructor parameter which explicitly
associates it with the actual member variable.
@CordaSerializable
class ConfirmRequest(statesToConsume: List<StateRef>, val transactionId: SecureHash) {
companion object {
private val stateRefComparator = compareBy<StateRef>({ it.txhash }, { it.index })
}
private val states = statesToConsume.sortedWith(stateRefComparator)
//Explicit "getter" for a property identified from the constructor parameters
fun getStatesToConsume() = states
}
Mutable Containers¶
Because Java fundamentally provides no mechanism by which the mutability of a class can be determined, this presents a problem for the serialization framework. When reconstituting objects with container properties (lists, maps, etc.) we must choose whether to create mutable or immutable objects. Given the restrictions, we have decided it is better to preserve the immutability of immutable objects rather than force mutability on presumed immutable objects.
Note
Whilst we could potentially infer mutability empirically, doing so exhaustively is impossible as it’s a design decision rather than something intrinsic to the JVM. At present, we defer to simply making things immutable on reconstruction with the following workarounds provided for those who use them. In future, this may change, but for now use the following examples as a guide.
For example, consider the following:
data class C(val l : MutableList<String>)
val bytes = C(mutableListOf ("a", "b", "c")).serialize()
val newC = bytes.deserialize()
newC.l.add("d")
The call to newC.l.add
will throw an UnsupportedOperationException
.
There are several workarounds that can be used to preserve mutability on reconstituted objects. Firstly, if the class isn't a Kotlin data class, and thus isn't restricted by having to have a primary constructor, a constructor that copies the list can be used:
class C {
val l : MutableList<String>
@Suppress("Unused")
constructor (l : MutableList<String>) {
this.l = l.toMutableList()
}
}
val bytes = C(mutableListOf ("a", "b", "c")).serialize()
val newC = bytes.deserialize()
// This time this call will succeed
newC.l.add("d")
Secondly, if the class is a Kotlin data class, a secondary constructor can be used.
data class C (val l : MutableList<String>){
@ConstructorForDeserialization
@Suppress("Unused")
constructor (l : Collection<String>) : this (l.toMutableList())
}
val bytes = C(mutableListOf ("a", "b", "c")).serialize()
val newC = bytes.deserialize()
// This will also work
newC.l.add("d")
Thirdly, to preserve the immutability of objects (a recommended design principle: copy-on-write semantics), the contents of the class can be mutated by creating a new copy of the data class, with the altered list (in this example) passed in as the constructor parameter.
data class C(val l : List<String>)
val bytes = C(listOf ("a", "b", "c")).serialize()
val newC = bytes.deserialize()
val newC2 = newC.copy (l = (newC.l + "d"))
Note
If mutability isn't an issue at all then, in the case of data classes, a single constructor can be used by making the property var instead of val and reassigning the property to a mutable instance in the init block, as shown in the sketch below.
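A minimal sketch of that approach:

data class C(var l: MutableList<String>) {
    init {
        // Reassign so that whatever list the framework passes in is copied
        // into a mutable instance
        l = l.toMutableList()
    }
}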
Enums¶
All enums are supported, provided they are annotated with @CordaSerializable
. Corda supports interoperability of
enumerated type versions. This allows such types to be changed over time without breaking backward (or forward)
compatibility. The rules and mechanisms for doing this are discussed in Enum Evolution.
Exceptions¶
The following rules apply to supported Throwable
implementations.
- If you wish for your exception to be serializable and transported type-safely, it should inherit from either CordaException or CordaRuntimeException
- If not, the Throwable will deserialize to a CordaRuntimeException with the details of the original Throwable contained within it, including the class name of the original Throwable
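For example, a CorDapp-specific exception (hypothetical name) that serializes and transports type-safely could simply extend CordaRuntimeException:

import net.corda.core.CordaRuntimeException

class InsufficientFundsException(message: String) : CordaRuntimeException(message)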
Kotlin Objects¶
Kotlin’s non-anonymous objects (i.e. constructs like object foo : Contract {...}) are singletons and are treated differently. They are recorded into the stream with no properties, and deserialize back to the singleton instance. Currently, the same is not true of Java singletons, which will deserialize to new instances of the class. This is hard to fix because there’s no perfectly standard idiom for Java singletons.
Kotlin’s anonymous objects (i.e. constructs like object : Contract {...}) are not currently supported and will not serialize correctly. They need to be re-written as an explicit class declaration.
Class synthesis¶
Corda serialization supports dynamically synthesising classes from the supplied schema when deserializing, without the supporting classes being present on the classpath. This can be useful where generic code might expect to be able to use reflection over the deserialized data, for scripting languages that run on the JVM, and also for ensuring classes not on the classpath can be deserialized without loading potentially malicious code.
If the original class implements some interfaces then the carpenter will make sure that all of the interface methods are
backed by fields. If that’s not the case then an exception will be thrown during deserialization. This check can
be turned off with SerializationContext.withLenientCarpenter
. This can be useful if only the field getters are needed,
say in an object viewer.
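As a sketch, assuming bytes is a SerializedBytes value already in hand, a lenient context could be obtained and used as follows:

import net.corda.core.serialization.SerializationDefaults
import net.corda.core.serialization.deserialize

val lenientContext = SerializationDefaults.STORAGE_CONTEXT.withLenientCarpenter()
val obj = bytes.deserialize(context = lenientContext)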
Calculated values¶
In some cases, for example the exitKeys field in FungibleState
, a property in an interface may normally be implemented
as a calculated value, with a “getter” method for reading it but neither a corresponding constructor parameter nor a
“setter” method for writing it. In this case, it will not automatically be included among the properties to be serialized,
since the receiving class would ordinarily be able to re-calculate it on demand. However, a synthesized class will not
have the method implementation which knows how to calculate the value, and a cast to the interface will fail because the
property is not serialized and so the “getter” method present in the interface will not be synthesized.
The solution is to annotate the method with the SerializableCalculatedProperty
annotation, which will cause the value
exposed by the method to be read and transmitted during serialization, but discarded during normal deserialization. The
synthesized class will then include a backing field together with a “getter” for the serialized calculated value, and will
remain compatible with the interface.
If the annotation is added to the method in the interface, then all implementing classes must calculate the value and
none may have a corresponding backing field; alternatively, it can be added to the overriding method on each implementing
class where the value is calculated and there is no backing field. If the field is a Kotlin val, then the annotation should be targeted at its getter method, e.g. @get:SerializableCalculatedProperty.
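For example (hypothetical class), a calculated Kotlin val could be marked on its getter like so:

import net.corda.core.serialization.CordaSerializable
import net.corda.core.serialization.SerializableCalculatedProperty

@CordaSerializable
class Order(val unitPrice: Long, val quantity: Int) {
    // Calculated from the other properties; serialized for the benefit of
    // synthesized classes, recalculated on normal deserialization
    @get:SerializableCalculatedProperty
    val totalPrice: Long get() = unitPrice * quantity
}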
Future enhancements¶
Possible future enhancements include:
- Java singleton support. We will add support for identifying classes which are singletons and identifying the static method responsible for returning the singleton instance
- Instance internalizing support. We will add support for identifying classes that should be resolved against an instances map to avoid creating many duplicate instances that are equal (similar to
String.intern()
)
Type Evolution¶
Type evolution is the mechanism by which classes can be altered over time yet still remain serializable and deserializable across all versions of the class. This ensures an object serialized with an older idea of what the class “looked like” can be deserialized and a version of the current state of the class instantiated.
More detail can be found in Default Class Evolution.
Pluggable Serializers for CorDapps¶
Contents
To be serializable by Corda, Java classes must be compiled with the -parameters switch to enable matching of their properties to constructor parameters. This is important because Corda’s internal AMQP serialization scheme will only construct objects using their constructors. However, when recompilation isn’t possible, or when classes are built in such a way that they cannot be easily modified for simple serialization, CorDapps can provide custom proxy serializers that Corda can use to move from types it cannot serialize to an interim representation that it can, with the transformation to and from this proxy object being handled by the supplied serializer.
Serializer Location¶
Custom serializer classes should follow the rules for including classes found in Building and installing a CorDapp.
Writing a Custom Serializer¶
Serializers must:

- Inherit from net.corda.core.serialization.SerializationCustomSerializer
- Provide a proxy class to transform the object to and from
- Implement the toProxy and fromProxy methods
- Be either included in the CorDapp JAR or made known to the running process via the amqp.custom.serialization.scanSpec system property. This system property may be necessary for custom serializers to be discovered on the classpath. At a minimum, the value of the property should include a comma-separated set of packages where the custom serializers are located. The full syntax includes the scanning specification as defined by: <http://github.com/lukehutch/fast-classpath-scanner/wiki/2.-Constructor#scan-spec>
Serializers inheriting from SerializationCustomSerializer
have to implement two methods and two types.
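For example, a node could be launched with the scan specification pointing at the package that contains the serializers (package name hypothetical):

java -Damqp.custom.serialization.scanSpec="com.example.serializers" -jar corda.jar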
Example¶
Consider the following class:
public final class Example {
    private final int a;
    private final int b;

    // Because this is marked private the serialization framework will not
    // consider it when looking to see which constructor should be used
    // when serializing instances of this class.
    private Example(int a, int b) {
        this.a = a;
        this.b = b;
    }

    public static Example of(int[] a) { return new Example(a[0], a[1]); }

    public int getA() { return a; }
    public int getB() { return b; }
}
Without a custom serializer we cannot serialize this class as there is no public constructor that facilitates the initialisation of all of its properties.
Note
This is clearly a contrived example, simply making the constructor public would alleviate the issues. However, for the purposes of this example we are assuming that for external reasons this cannot be done.
To be serializable by Corda this would require a custom serializer to be written that can transform the unserializable class into a form we can serialize. Continuing the above example, this could be written as follows:
/**
* The class lacks a public constructor that takes parameters it can associate
* with its properties and is thus not serializable by the CORDA serialization
* framework.
*/
class Example {
private int a;
private int b;
public int getA() { return a; }
public int getB() { return b; }
public Example(List<Integer> l) {
this.a = l.get(0);
this.b = l.get(1);
}
}
/**
* This is the class that will Proxy instances of Example within the serializer
*/
public class ExampleProxy {
/**
* These properties will be serialized into the byte stream, this is where we choose how to
* represent instances of the object we're proxying. In this example, which is somewhat
* contrived, this choice is obvious. In your own classes / 3rd party libraries, however, this
* may require more thought.
*/
private int proxiedA;
private int proxiedB;
/**
* The proxy class itself must be serializable by the framework, it must thus have a constructor that
* can be mapped to the properties of the class via getter methods.
*/
public int getProxiedA() { return proxiedA; }
public int getProxiedB() { return proxiedB; }
public ExampleProxy(int proxiedA, int proxiedB) {
this.proxiedA = proxiedA;
this.proxiedB = proxiedB;
}
}
/**
 * Finally, this is the custom serializer that will be automatically loaded into the
 * serialization framework when the CorDapp JAR is scanned at runtime.
*/
public class ExampleSerializer implements SerializationCustomSerializer<Example, ExampleProxy> {
/**
* Given an instance of the Example class, create an instance of the proxying object ExampleProxy.
*
* Essentially convert Example -> ExampleProxy
*/
public ExampleProxy toProxy(Example obj) {
return new ExampleProxy(obj.getA(), obj.getB());
}
/**
* Conversely, given an instance of the proxy object, revert that back to an instance of the
* type being proxied.
*
* Essentially convert ExampleProxy -> Example
*/
public Example fromProxy(ExampleProxy proxy) {
List<Integer> l = new ArrayList<>(2);
l.add(proxy.getProxiedA());
l.add(proxy.getProxiedB());
return new Example(l);
}
}
class ExampleSerializer : SerializationCustomSerializer<Example, ExampleSerializer.Proxy> {
/**
* This is the actual proxy class that is used as an intermediate representation
* of the Example class
*/
data class Proxy(val a: Int, val b: Int)
/**
* This method should be able to take an instance of the type being proxied and
 * transpose it into that form, instantiating an instance of the Proxy object (it
 * is this class instance that will be serialized into the byte stream).
*/
override fun toProxy(obj: Example) = Proxy(obj.a, obj.b)
/**
* This method is used during deserialization. The bytes will have been read
* from the serialized blob and an instance of the Proxy class returned, we must
* now be able to transform that back into an instance of our original class.
*
 * In our example this requires us to invoke the static "of" method on the
* Example class, transforming the serialized properties of the Proxy instance
* into a form expected by the construction method of Example.
*/
override fun fromProxy(proxy: Proxy) : Example {
val constructorArg = IntArray(2)
constructorArg[0] = proxy.a
constructorArg[1] = proxy.b
return Example.of(constructorArg)
}
}
In the above examples, ExampleSerializer is the actual serializer that will be loaded by the framework to serialize instances of the Example type. ExampleSerializer.Proxy, in the Kotlin example, and ExampleProxy, in the Java example, is the intermediate representation used by the framework to represent instances of Example within the wire format.
The Proxy Object¶
The proxy object should be thought of as an intermediate representation that the serialization framework can reason about. One is written for a class because, for some reason, that class cannot be introspected successfully by the framework. It is therefore important to note that the proxy class must only contain elements that the framework can reason about.
The proxy class itself is distinct from the proxy serializer. The serializer must refer to the unserializable
type in the toProxy
and fromProxy
methods.
For example, the first thought a developer may have when implementing a proxy class is to simply wrap an instance of the object being proxied. This is shown below:
class ExampleSerializer : SerializationCustomSerializer<Example, ExampleSerializer.Proxy> {
/**
* In this example, we are trying to wrap the Example type to make it serializable
*/
data class Proxy(val e: Example)
override fun toProxy(obj: Example) = Proxy(obj)
override fun fromProxy(proxy: Proxy) : Example {
return proxy.e
}
}
However, this will not work because we have created a recursive loop: synthesising a serializer for the Example type requires synthesising one for ExampleSerializer.Proxy, which in turn requires one for Example, and so on until we get a StackOverflowException.
The solution, as shown initially, is to create the intermediate form (the Proxy object) purely in terms of types the serialization framework can reason about.
Important
When composing a proxy object for a class be aware that everything within that structure will be written into the serialized byte stream.
Whitelisting¶
Writing a custom serializer for a class has the effect of adding that class to the whitelist, meaning such classes don’t need to be explicitly added to the CorDapp’s whitelist.
Default Class Evolution¶
Contents
Whilst more complex evolutionary modifications to classes require annotating, Corda’s serialization framework supports several minor modifications to classes without any external modification save the actual code changes. These are:
- Adding nullable properties
- Adding non nullable properties if, and only if, an annotated constructor is provided
- Removing properties
- Reordering constructor parameters
Adding Nullable Properties¶
The serialization framework allows nullable properties to be freely added. For example:
// Initial instance of the class
data class Example1 (val a: Int, val b: String) // (Version A)

// Class post addition of property c
data class Example1 (val a: Int, val b: String, val c: Int?) // (Version B)
A node with version A of class Example1
will be able to deserialize a blob serialized by a node with it
at version B as the framework would treat it as a removed property.
A node with the class at version B will be able to deserialize a serialized version A of Example1
without
any modification as the property is nullable and will thus provide null to the constructor.
Adding Non Nullable Properties¶
If a non null property is added then, unlike with nullable properties, some additional code is required for this to work. Consider a similar example to our nullable example above:
// Initial instance of the class
data class Example2 (val a: Int, val b: String) // (Version A)

// Class post addition of property c
data class Example2 (val a: Int, val b: String, val c: Int) { // (Version B)
    @DeprecatedConstructorForDeserialization(1)
    constructor (a: Int, b: String) : this(a, b, 0) // 0 has been determined as a sensible default
}
For this to work, we have had to add a new constructor that allows nodes with the class at version B to create an instance from the serialised form of an older version of that class, in this case version A as per our example above. A sensible default for the missing value is provided for instantiation of the non null property.
Note
The @DeprecatedConstructorForDeserialization annotation is important: it signifies to the serialization framework that this constructor should be considered for building instances of the object when evolution is required.
Furthermore, the integer parameter passed to the annotation indicates a precedence order; see the discussion below.
As before, instances of the class at version A will be able to deserialize serialized forms of example B as it will simply treat them as if the property has been removed (as from its perspective, they will have been).
Constructor Versioning¶
If, over time, multiple non nullable properties are added, then a class will potentially have to be able to deserialize a number of different forms of the class. Being able to select the correct constructor is important to ensure the maximum information is extracted.
Consider this example:
// The original version of the class
data class Example3 (val a: Int, val b: Int)
// The first alteration, property c added
data class Example3 (val a: Int, val b: Int, val c: Int)
// The second alteration, property d added
data class Example3 (val a: Int, val b: Int, val c: Int, val d: Int)
// The third alteration, and how it currently exists, property e added
data class Example3 (val a: Int, val b: Int, val c: Int, val d: Int, val e: Int) {
    // NOTE: version number purposefully omitted from annotation for demonstration purposes
    @DeprecatedConstructorForDeserialization
    constructor (a: Int, b: Int) : this(a, b, -1, -1, -1)              // alt constructor 1
    @DeprecatedConstructorForDeserialization
    constructor (a: Int, b: Int, c: Int) : this(a, b, c, -1, -1)       // alt constructor 2
    @DeprecatedConstructorForDeserialization
    constructor (a: Int, b: Int, c: Int, d: Int) : this(a, b, c, d, -1) // alt constructor 3
}
In this case, the deserializer has to be able to deserialize instances of class Example3
that were serialized as, for example:
Example3 (1, 2) // example I
Example3 (1, 2, 3) // example II
Example3 (1, 2, 3, 4) // example III
Example3 (1, 2, 3, 4, 5) // example IV
Examples I, II, and III would require evolution and thus selection of a constructor. Now, with no versioning applied there is ambiguity as to which constructor should be used. For example, example II could use ‘alt constructor 2’, which matches its arguments most tightly, or ‘alt constructor 1’ and not instantiate parameter c.
constructor (a: Int, b: Int, c: Int) : this(a, b, c, -1, -1)
or
constructor (a: Int, b: Int) : this(a, b, -1, -1, -1)
Whilst it may seem trivial which should be picked, it is still ambiguous, thus we use a versioning number in the constructor annotation which gives a strict precedence order to constructor selection. Therefore, the proper form of the example would be:
// The third alteration, and how it currently exists, property e added
data class Example3 (val a: Int, val b: Int, val c: Int, val d: Int, val e: Int) {
    @DeprecatedConstructorForDeserialization(1)
    constructor (a: Int, b: Int) : this(a, b, -1, -1, -1)              // alt constructor 1
    @DeprecatedConstructorForDeserialization(2)
    constructor (a: Int, b: Int, c: Int) : this(a, b, c, -1, -1)       // alt constructor 2
    @DeprecatedConstructorForDeserialization(3)
    constructor (a: Int, b: Int, c: Int, d: Int) : this(a, b, c, d, -1) // alt constructor 3
}
Constructors are selected in strict descending order, taking the first one that enables construction. So, deserializing examples I to IV would give us:
Example3 (1, 2, -1, -1, -1) // example I
Example3 (1, 2, 3, -1, -1) // example II
Example3 (1, 2, 3, 4, -1) // example III
Example3 (1, 2, 3, 4, 5) // example IV
Removing Properties¶
Property removal is effectively a mirror of adding properties (both nullable and non nullable) given that this functionality is required to facilitate the addition of properties. When this state is detected by the serialization framework, properties that don’t have matching parameters in the main constructor are simply omitted from object construction.
// Initial instance of the class
data class Example4 (val a: Int?, val b: String?, val c: Int?) // (Version A)

// Class post removal of property 'a'
data class Example4 (val b: String?, val c: Int?) // (Version B)
In practice, this means that removing nullable properties is possible. However, removing non nullable properties isn’t, because a node receiving a message containing a serialized form of an object with fewer properties than it requires for construction has no capacity to guess at what values should or could be used as sensible defaults. When those properties are nullable, it simply sets them to null.
Reordering Constructor Parameter Order¶
Properties (in Kotlin this corresponds to constructor parameters) may be reordered freely. The evolution serializer will create a mapping between how a class was serialized and its current constructor parameter order. This is important to our AMQP framework as it constructs objects using their primary (or annotated) constructor, and the ordering of that constructor’s parameters determines how an object’s properties are serialised into the byte stream.
For an illustrative example consider a simple class:
data class Example5 (val a: Int, val b: String)
val e = Example5(999, "hello")
When we serialize e
its properties will be encoded in the order of its primary constructor’s parameters, so:
999,hello
Were those parameters to be reordered post serialisation then deserializing, without evolution, would fail with a basic
type error as we’d attempt to create the new value of Example5
with the values provided in the wrong order:
// changed post serialisation
data class Example5 (val b: String, val a: Int)
| 999 | hello | <--- Extract properties to pass to constructor from byte stream
| |
| +--------------------------+
+--------------------------+ |
| |
deserializedValue = Example5(999, "hello") <--- Resulting attempt at construction
| |
| \
| \ <--- Will clearly fail as 999 is not a
| \ string and hello is not an integer
data class Example5 (val b: String, val a: Int)
Enum Evolution¶
Contents
In the continued development of a CorDapp, an enumerated type that was fit for purpose at one time may require changing. Normally this would be problematic: anything serialised (and kept in a vault) would run the risk of being unable to be deserialized in the future, and older versions of the app still alive within a compatibility zone might fail to deserialize a message.
To facilitate backward and forward support for alterations to enumerated types, Corda’s serialization framework supports the evolution of such types through a well defined framework that allows differing versions of an enumeration to interoperate with serialised forms of other versions.
This is achieved through the use of certain annotations. Whenever a change is made, an annotation capturing the change must be added (whilst it can be omitted, any interoperability will be lost). Corda supports two modifications to enumerated types: adding new constants, and renaming existing constants.
Warning
Once added, evolution annotations MUST NEVER be removed from a class; doing so will break both forward and backward compatibility for this version of the class and any version moving forward.
The Purpose of Annotating Changes¶
The biggest hurdle to allowing enum constants to be changed is that there will exist instances of those classes, either serialized in a vault or on nodes with the old, unmodified, version of the class that we must be able to interoperate with. Thus if a received data structure references an enum assigned a constant value that doesn’t exist on the running JVM, a solution is needed.
For this, we use the annotations to allow developers to express their backward compatible intentions.
In the case of renaming constants this is somewhat obvious, the deserializing node will simply treat any constants it doesn’t understand as their “old” values, i.e. those values that it currently knows about.
In the case of adding new constants, the developer must choose which constant (that existed before adding the new one) a deserializing system should treat any instances of the new one as.
Note
Ultimately, this may mean some design compromises are required. If an enumeration is planned as being often extended and no sensible defaults will exist, then including a constant in the original version of the class that all new additions can default to may make sense.
Evolution Transmission¶
An object serializer, on creation, will inspect the class it represents for any evolution annotations. If a class is thus decorated those rules will be encoded as part of any serialized representation of a data structure containing that class. This ensures that on deserialization the deserializing object will have access to any transformative rules it needs to build a local instance of the serialized object.
Evolution Precedence¶
On deserialization (technically on construction of a serialization object that facilitates serialization and deserialization) a class’s fingerprint is compared to the fingerprint received as part of the AMQP header of the corresponding class. If they match then we are sure that the two class versions are functionally the same and no further steps are required save the deserialization of the serialized information into an instance of the class.
If, however, the fingerprints differ then we know that the class we are attempting to deserialize is different from the version we will be deserializing it into. What we cannot know is which version is newer, at least not by examining the fingerprint.
Note
Corda’s AMQP fingerprinting for enumerated types includes the type name and the enum constants.
Newer vs older is important as the deserializer needs to use the more recent set of transforms to ensure it can transform the serialised object into the form as it exists in the deserializer. Newness is determined simply by the length of the list of all transforms. This is sufficient as transform annotations should only ever be added.
Warning
Technically there is nothing to prevent annotations being removed in newer versions. However, this will break backward compatibility and should thus be avoided unless a rigorous upgrade procedure is in place to cope with all deployed instances of the class and all serialised versions existing within vaults.
Thus, on deserialization, there will be two options to choose from in terms of transformation rules:
- Determined from the local class and the annotations applied to it (the local copy)
- Parsed from the AMQP header (the remote copy)
Whichever set is larger will be used.
Renaming Constants¶
Renamed constants are marked as such with the @CordaSerializationTransformRenames meta annotation that wraps a list of @CordaSerializationTransformRename annotations. Each rename requires an instance in the list.
Each instance must provide the new name of the constant as well as the old. For example, consider the following enumeration:
enum class Example {
A, B, C
}
If we were to rename constant C to D this would be done as follows:
@CordaSerializationTransformRenames (
CordaSerializationTransformRename("D", "C")
)
enum class Example {
A, B, D
}
Note
The parameters to the CordaSerializationTransformRename annotation are defined as ‘to’ and ‘from’, so in the above example it can be read as: constant D (given that is how the class now exists) was renamed from C.
In the case where a single rename has been applied the meta annotation may be omitted. Thus, the following is functionally identical to the above:
@CordaSerializationTransformRename("D", "C")
enum class Example {
A, B, D
}
However, as soon as a second rename is made the meta annotation must be used. For example, if at some time later B is renamed to E:
@CordaSerializationTransformRenames (
CordaSerializationTransformRename(from = "B", to = "E"),
CordaSerializationTransformRename(from = "C", to = "D")
)
enum class Example {
A, E, D
}
Rules¶
- A constant cannot be renamed to match an existing constant; this is enforced through language constraints
- A constant cannot be renamed to a value that matches any previous name of any other constant

If either of these rules is inadvertently broken, a NotSerializableException will be thrown by the serialization engine as soon as the violation is detected. Normally this will be the first time an object breaking the rule is serialized; however, in some circumstances, it could be at the point of deserialization.
Adding Constants¶
Enumeration constants can be added with the @CordaSerializationTransformEnumDefaults meta annotation that wraps a list of CordaSerializationTransformEnumDefault annotations. For each constant added, an annotation must be included that signifies, on deserialization, which constant value should be used in place of the serialised property if that value doesn’t exist on the version of the class as it exists on the deserializing node.
enum class Example {
A, B, C
}
If we were to add the constant D:
@CordaSerializationTransformEnumDefaults (
CordaSerializationTransformEnumDefault("D", "C")
)
enum class Example {
A, B, C, D
}
Note
The parameters to the CordaSerializationTransformEnumDefault annotation are defined as ‘new’ and ‘old’, so in the above example it can be read as: constant D should be treated as constant C if you, the deserializing node, don’t know anything about constant D.
Note
Just as with the CordaSerializationTransformRename transformation, if a single transform is being applied then the meta annotation may be omitted.
@CordaSerializationTransformEnumDefault("D", "C")
enum class Example {
A, B, C, D
}
New constants may default to any other constant older than them, including constants that have also been added since inception. In this example, having added D (above), we add the constant E and choose to default it to D:
@CordaSerializationTransformEnumDefaults (
CordaSerializationTransformEnumDefault("E", "D"),
CordaSerializationTransformEnumDefault("D", "C")
)
enum class Example {
A, B, C, D, E
}
Note
Alternatively, we could have decided both new constants should have been defaulted to the first element:
@CordaSerializationTransformEnumDefaults (
CordaSerializationTransformEnumDefault("E", "A"),
CordaSerializationTransformEnumDefault("D", "A")
)
enum class Example {
A, B, C, D, E
}
When deserializing, the most applicable transform will be applied. Continuing the above example, deserializing nodes could have three distinct views on what the enum Example looks like (annotations omitted for brevity):
// The original version of the class. Will deserialize: -
// A -> A
// B -> B
// C -> C
// D -> C
// E -> C
enum class Example {
A, B, C
}
// The class as it existed after the first addition. Will deserialize:
// A -> A
// B -> B
// C -> C
// D -> D
// E -> D
enum class Example {
A, B, C, D
}
// The current state of the class. All values will deserialize as themselves
enum class Example {
A, B, C, D, E
}
Thus, when deserializing, a value that has been encoded as E could be set to one of three constants (E, D, or C) depending on how the deserializing node understands the class.
Rules¶
- New constants must be added to the end of the existing list of constants
- Defaults can only be set to “older” constants, i.e. those to the left of the new constant in the list
- Constants must never be removed once added
- New constants can be renamed at a later date using the appropriate annotation
- When renamed, if a defaulting annotation refers to the old name, it should be left as is
Combining Evolutions¶
Renaming constants and adding constants can be freely combined over time as a class changes. Added constants can in turn be renamed, and everything will continue to be deserializable. For example, consider the following enum:
enum class OngoingExample { A, B, C }
For the first evolution, two constants are added, D and E, both of which are set to default to C when not present
@CordaSerializationTransformEnumDefaults (
CordaSerializationTransformEnumDefault("E", "C"),
CordaSerializationTransformEnumDefault("D", "C")
)
enum class OngoingExample { A, B, C, D, E }
Then let’s assume constant C is renamed to CAT:
@CordaSerializationTransformEnumDefaults (
CordaSerializationTransformEnumDefault("E", "C"),
CordaSerializationTransformEnumDefault("D", "C")
)
@CordaSerializationTransformRename("C", "CAT")
enum class OngoingExample { A, B, CAT, D, E }
Note how the first set of modifications still reference C, not CAT. This is as it should be and will continue to work as expected.
Subsequently, it is fine to add an additional new constant that references the renamed value.
@CordaSerializationTransformEnumDefaults (
CordaSerializationTransformEnumDefault("F", "CAT"),
CordaSerializationTransformEnumDefault("E", "C"),
CordaSerializationTransformEnumDefault("D", "C")
)
@CordaSerializationTransformRename("C", "CAT")
enum class OngoingExample { A, B, CAT, D, E, F }
Unsupported Evolutions¶
The following evolutions are not currently supported:
- Removing constants
- Reordering constants
Blob Inspector¶
There are many benefits to having a custom binary serialisation format (see Object serialization for details) but one
disadvantage is the inability to view the contents in a human-friendly manner. The Corda Blob Inspector tool alleviates
this issue by allowing the contents of a binary blob file (or URL end-point) to be output in either YAML or JSON. It
uses JacksonSupport
to do this (see JSON).
The tool can be downloaded from here.
To run simply pass in the file or URL as the first parameter:
java -jar blob-inspector.jar <file or URL>
Use the --help
flag for a full list of command line options.
When inspecting your custom data structures, there’s no need to include the jars containing the class definitions for them in the classpath. The blob inspector (or rather the serialization framework) is able to synthesize any classes found in the blob that aren’t on the classpath.
Supported formats¶
The inspector can read input data in three formats: raw binary, hex encoded text and base64 encoded text. For instance if you have retrieved your binary data and it looks like this:
636f7264610100000080c562000000000001d0000030720000000300a3226e65742e636f7264613a38674f537471464b414a5055...
then you have hex encoded data. If it looks like this it’s base64 encoded:
Y29yZGEBAAAAgMViAAAAAAAB0AAAMHIAAAADAKMibmV0LmNvcmRhOjhnT1N0cUZLQUpQVWVvY2Z2M1NlU1E9PdAAACc1AAAAAgCjIm5l...
And if it looks like something vomited over your screen it’s raw binary. You don’t normally need to care about these differences because the tool will try every format until it works.
Something that’s useful to know about Corda’s format is that it always starts with the word “corda” in binary. Try hex decoding 636f726461 using the online hex decoder tool here to see for yourself.
Output data can be in either a slightly extended form of YAML or JSON. YAML is a bit easier for humans to read and is the default. JSON can of course be parsed by any JSON library in any language.
Note
One thing to note is that the binary blob may contain embedded SerializedBytes
objects. Rather than printing these
out as a Base64 string, the blob inspector will first materialise them into Java objects and then output those. You will
see this when dealing with classes such as SignedData
or other structures that attach a signature, such as the
nodeInfo-*
files or the network-parameters
file in the node’s directory.
Example¶
Here’s what a node-info file from the node’s data directory may look like:
- YAML:
net.corda.nodeapi.internal.SignedNodeInfo
---
raw:
class: "net.corda.core.node.NodeInfo"
deserialized:
addresses:
- "localhost:10005"
legalIdentitiesAndCerts:
- "O=BankOfCorda, L=London, C=GB"
platformVersion: 4
serial: 1527851068715
signatures:
- !!binary |-
VFRy4frbgRDbCpK1Vo88PyUoj01vbRnMR3ROR2abTFk7yJ14901aeScX/CiEP+CDGiMRsdw01cXt\nhKSobAY7Dw==
- JSON:
net.corda.nodeapi.internal.SignedNodeInfo
{
"raw" : {
"class" : "net.corda.core.node.NodeInfo",
"deserialized" : {
"addresses" : [ "localhost:10005" ],
"legalIdentitiesAndCerts" : [ "O=BankOfCorda, L=London, C=GB" ],
"platformVersion" : 4,
"serial" : 1527851068715
}
},
"signatures" : [ "VFRy4frbgRDbCpK1Vo88PyUoj01vbRnMR3ROR2abTFk7yJ14901aeScX/CiEP+CDGiMRsdw01cXthKSobAY7Dw==" ]
}
Notice the file is actually a serialised SignedNodeInfo
object, which has a raw
property of type SerializedBytes<NodeInfo>
.
This property is materialised into a NodeInfo
and is output under the deserialized
field.
Command-line options¶
The blob inspector can be started with the following command-line options:
blob-inspector [-hvV] [--full-parties] [--schema] [--format=type]
[--input-format=type] [--logging-level=<loggingLevel>] SOURCE
[COMMAND]
- --format=type: Output format. Possible values: [YAML, JSON]. Default: YAML.
- --input-format=type: Input format. If the file can’t be decoded with the given value it’s auto-detected, so you should never normally need to specify this. Possible values: [BINARY, HEX, BASE64]. Default: BINARY.
- --full-parties: Display the owningKey and certPath properties of Party and PartyAndReference objects respectively.
- --schema: Print the blob’s schema first.
- --verbose, --log-to-console, -v: If set, prints logging to the console as well as to a file.
- --logging-level=<loggingLevel>: Enable logging at this level and higher. Possible values: ERROR, WARN, INFO, DEBUG, TRACE. Default: INFO.
- --help, -h: Show this help message and exit.
- --version, -V: Print version information and exit.
Sub-commands¶
install-shell-extensions: Install the blob-inspector alias and auto-completion for bash and zsh. See Shell extensions for CLI Applications for more info.
JSON¶
Corda provides a module that extends the popular Jackson serialisation engine. Jackson is often used to serialise to and from JSON, but also supports other formats such as YaML and XML. Jackson is itself very modular and has a variety of plugins that extend its functionality. You can learn more at the Jackson home page.
To gain support for JSON serialisation of common Corda data types, include a dependency on net.corda:jackson:XXX
in your Gradle or Maven build file, where XXX is of course the Corda version you are targeting (0.9 for M9, for instance).
Then you can obtain a Jackson ObjectMapper instance configured for use via the JacksonSupport.createNonRpcMapper() method. There are variants of this method for obtaining mappers configured in other ways: if you have an RPC connection to the node (see “Interacting with a node”), then your JSON mapper can resolve identities found in objects.
The API is described in detail here:
import net.corda.jackson.JacksonSupport
val mapper = JacksonSupport.createNonRpcMapper()
val json = mapper.writeValueAsString(myCordaState) // myCordaState can be any object.
import com.fasterxml.jackson.databind.ObjectMapper;
import net.corda.jackson.JacksonSupport;

ObjectMapper mapper = JacksonSupport.createNonRpcMapper();
String json = mapper.writeValueAsString(myCordaState); // myCordaState can be any object.
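If you do have an RPC connection available, a sketch of the identity-resolving variant (connection details and credentials hypothetical) might look like:

import net.corda.client.rpc.CordaRPCClient
import net.corda.core.utilities.NetworkHostAndPort
import net.corda.jackson.JacksonSupport

val client = CordaRPCClient(NetworkHostAndPort("localhost", 10006))
val connection = client.start("user1", "test")
// A mapper backed by the RPC proxy can resolve identities found in objects
val rpcMapper = JacksonSupport.createDefaultMapper(connection.proxy)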
Note
The way mappers interact with identity and RPC is likely to change in a future release.
Troubleshooting¶
Please report any issues on our StackOverflow page: https://stackoverflow.com/questions/tagged/corda.
Nodes¶
Node folder structure¶
A folder containing a Corda node’s files has the following structure:
.
├── additional-node-infos // Additional node infos to load into the network map cache, beyond what the network map server provides
├── artemis // Stores buffered P2P messages
├── brokers // Stores buffered RPC messages
├── certificates // The node's certificates
├── corda-webserver.jar // The built-in node webserver (DEPRECATED)
├── corda.jar // The core Corda libraries (This is the actual Corda node implementation)
├── cordapps // The CorDapp JARs installed on the node
├── drivers // Contains a Jolokia driver used to export JMX metrics, the node loads any additional JAR files from this directory at startup.
├── logs // The node's logs
├── network-parameters // The network parameters automatically downloaded from the network map server
├── node.conf // The node's configuration files
├── persistence.mv.db // The node's database
└── shell-commands // Custom shell commands defined by the node owner
You install CorDapps on the node by placing CorDapp JARs in the cordapps
folder.
In development mode (i.e. when devMode = true
), the certificates
directory is filled with pre-configured
keystores if they do not already exist to ensure that developers can get the nodes working as quickly as
possible.
警告
These pre-configured keystores are not secure and must not be used in production environments.
The keystores store the key pairs and certificates under the following aliases:
- nodekeystore.jks uses the aliases cordaclientca and identity-private-key
- sslkeystore.jks uses the alias cordaclienttls
All the keystores use the password provided in the node’s configuration file via the keyStorePassword attribute. If no password is configured, it defaults to cordacadevpass.
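For example, the password could be set explicitly in node.conf (shown here with the development default):

keyStorePassword = "cordacadevpass"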
To learn more, see Network certificates.
Node naming¶
A node’s name must be a valid X.500 distinguished name. In order to be compatible with other implementations (particularly TLS implementations), we constrain the allowed X.500 name attribute types to a subset of the minimum supported set for X.509 certificates (specified in RFC 3280), plus the locality attribute:
- Organization (O)
- State (ST)
- Locality (L)
- Country (C)
- Organizational-unit (OU)
- Common name (CN)
Note that the serial number is intentionally excluded from Corda certificates in order to minimise scope for uncertainty in the distinguished name format. The distinguished name qualifier has been removed due to technical issues; consideration was given to “Corda” as qualifier, however the qualifier needs to reflect the Corda compatibility zone, not the technology involved. There may be many Corda namespaces, but only one R3 namespace on Corda. The ordering of attributes is important.
State
should be avoided unless required to differentiate from other localities
with the same or similar names at the
country level. For example, London (GB) would not need a state
, but St Ives would (there are two, one in Cornwall, one
in Cambridgeshire). As legal entities in Corda are likely to be located in major cities, this attribute is not expected to be
present in the majority of names, but is an option for the cases which require it.
The name must also obey the following constraints:

- The organisation, locality and country attributes are present
- The state, organisational-unit and common name attributes are optional
- The fields of the name have the following maximum character lengths:
  - Common name: 64
  - Organisation: 128
  - Organisation unit: 64
  - Locality: 64
  - State: 64
- The country attribute is a valid ISO 3166-1 (https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2) two letter code in upper-case
- The organisation field of the name obeys the following constraints:
  - Has at least two letters
  - Has no leading or trailing whitespace
  - Does not include the following characters: , " \
  - Is in NFKC normalization form
  - Does not contain the null character
  - Only the latin, common and inherited unicode scripts are supported
  - No double-spacing

This is to avoid right-to-left issues, debugging issues when we can’t pronounce names over the phone, and character confusability attacks.
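For example, a name using only the required attributes could look like: O=BankOfCorda, L=London, C=GB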
Note
The network operator of a Corda Network may put additional constraints on node naming in place.
External identifiers¶
Mappings to external identifiers such as Companies House nos., LEI, BIC, etc. should be stored in custom X.509 certificate extensions. These values may change for operational reasons, without the identity they’re associated with necessarily changing, and their inclusion in the distinguished name would cause significant logistical complications. The OID and format for these extensions will be described in a further specification.
Node configuration¶
Contents
Configuration file location¶
When starting a node, the corda.jar
file defaults to reading the node’s configuration from a node.conf
file in the directory from which the command to launch Corda is executed.
There are two command-line options to override this behaviour:
- The --config-file command line option allows you to specify a configuration file with a different name, or in a different file location. Paths are relative to the current working directory
- The --base-directory command line option allows you to specify the node’s workspace location. A node.conf configuration file is then expected in the root of this workspace.
If you specify both command line arguments at the same time, the node will fail to start.
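For example, to point the node at a differently named configuration file (file name hypothetical):

java -jar corda.jar --config-file=my-node.conf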
Configuration file format¶
The Corda configuration file uses the HOCON format, a superset of JSON with a number of features that make it well suited to configuration files. Please visit https://github.com/typesafehub/config/blob/master/HOCON.md for further details.
Please do NOT use double quotes ("
) in configuration keys.
Node setup will log Config files should not contain " in property names. Please fix: [key] as an error when it finds double quotes around keys. This prevents configuration errors when mixing keys containing . wrapped with double quotes and without them, e.g. the property "dataSourceProperties.dataSourceClassName" = "val" in reference.conf would not be overwritten by the property dataSourceProperties.dataSourceClassName = "val2" in node.conf.
By default the node will fail to start in the presence of unknown property keys. To alter this behaviour, the on-unknown-config-keys command-line argument can be set to IGNORE (default is FAIL).
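For example (a sketch, assuming the standard --flag=value syntax), to ignore unknown keys rather than fail:

java -jar corda.jar --on-unknown-config-keys=IGNORE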
Overriding values from node.conf¶
- Environment variables
  For example: ${NODE_TRUST_STORE_PASSWORD} would be replaced by the contents of environment variable NODE_TRUST_STORE_PASSWORD (see: Logging section).
- 环境变量
  比如 ${NODE_TRUST_STORE_PASSWORD} 将会被环境变量 NODE_TRUST_STORE_PASSWORD 的内容所替换(查看 Logging 部分)。
- JVM options
  JVM options or environment variables prefixed with corda. can override node.conf fields. Provided system properties can also set values for absent fields in node.conf. This is an example of adding/overriding the keyStore password:
- JVM 选项
  JVM 选项或者以 corda. 开始的环境变量能够重载 node.conf 中的字段。提供的系统属性同样也可以设置 node.conf 中缺少的字段。下边是一个添加/重载 keyStore 密码的例子:
java -Dcorda.rpcSettings.ssl.keyStorePassword=mypassword -jar node.jar
配置文件字段¶
注解
The available configuration fields are listed below in alphabetic order.
注解
可用的配置字段按照字母顺序排列:
- additionalP2PAddresses
An array of additional host:port values, which will be included in the advertised NodeInfo in the network map in addition to the p2pAddress. Nodes can use this configuration option to advertise HA endpoints and aliases to external parties.
一个额外的 host:port 值的数组,这些值会和 p2pAddress 一起被包含在网络地图中公布(advertised)的 NodeInfo 里。节点可以使用这个配置选项向外部参与方公布 HA endpoints 和别名。
Default: empty list
- attachmentContentCacheSizeMegaBytes
Optionally specify how much memory should be used to cache attachment contents in memory.
可选的设置使用多少内存来 cache attachment 内容。
Default: 10MB
- attachmentCacheBound
Optionally specify how many attachments should be cached locally. Note that this includes only the key and metadata, the content is cached separately and can be loaded lazily.
可选的设置应该有多少 attachments 应该被 cache 到本地。注意这个仅仅包含 key 和 metadata,内容是被单独 cache 并且可以被懒加载。
Default: 1024
- compatibilityZoneURL (deprecated)
The root address of the Corda compatibility zone network management services, it is used by the Corda node to register with the network and obtain a Corda node certificate, (See Network certificates for more information.) and also is used by the node to obtain network map information. Cannot be set at the same time as the networkServices option.
Important: old configuration value, please use networkServices
重要:这是一个旧的配置值,请使用 networkServices
Default: not defined
- cordappSignerKeyFingerprintBlacklist
List of public key fingerprints (SHA-256 of public key hash) that are not allowed as CorDapp JAR signers. The node will not load CorDapps signed by those keys. The option takes effect only in production mode and defaults to Corda development keys (
["56CA54E803CB87C8472EBD3FBC6A2F1876E814CEEBF74860BD46997F40729367", "83088052AF16700457AE2C978A7D8AC38DD6A7C713539D00B897CD03A5E5D31D"]
); in development mode any key is allowed to sign CorDapp JARs. Default: not defined
- crlCheckSoftFail
This is a boolean flag that when enabled (i.e.
true
value is set) causes certificate revocation list (CRL) checking to use soft fail mode. Soft fail mode allows the revocation check to succeed if the revocation status cannot be determined because of a network error. If this parameter is set tofalse
rigorous CRL checking takes place. This involves each certificate in the certificate path being checked for a CRL distribution point extension, and that this extension points to a URL serving a valid CRL. This means that if any CRL URL in the certificate path is inaccessible, the connection with the other party will fail and be marked as bad. Additionally, if any certificate in the hierarchy, including the self-generated node SSL certificate, is missing a valid CRL URL, then the certificate path will be marked as invalid.Default: true
- custom
Set custom command line attributes (e.g. Java system properties) on the node process via the capsule launcher
- jvmArgs:
A list of JVM arguments to apply to the node process. This removes any defaults specified from
corda.jar
, but can be overridden from the command line. See Setting JVM arguments for examples and details on the precedence of the different approaches to settings arguments.Default: not defined
- database
Database configuration
- transactionIsolationLevel:
Transaction isolation level as defined by the
TRANSACTION_
constants injava.sql.Connection
, but without theTRANSACTION_
prefix.就像在
java.sql.Connection
中由TRANSACTION_
常量定义的事务隔离级别(transaction isolation level),但是没有TRANSACTION_
前缀。Default:
REPEATABLE_READ
- exportHibernateJMXStatistics:
Whether to export Hibernate JMX statistics.
是否导出 Hibernate JMX statistics
Caution: enabling this option causes expensive run-time overhead
注意:开启这个选项会造成昂贵的 run-time overhead
Default: false
- initialiseSchema
Boolean which indicates whether to update the database schema at startup (or create the schema when node starts for the first time). If set to
false
on startup, the node will validate if it’s running against a compatible database schema.Default: true
- initialiseAppSchema
This property allows you to override
database.initialiseSchema
for the Hibernate DDL generation for CorDapp schemas.UPDATE
performs an update of CorDapp schemas, whileVALID
only verifies their integrity andNONE
performs no check. WheninitialiseSchema
is set tofalse
, theninitialiseAppSchema
may be set asVALID
orNONE
only.Default: CorDapp schema creation is controlled with
initialiseSchema
.
- dataSourceProperties
This section is used to configure the JDBC connection and database driver used for the node’s persistence. Node database contains example configurations for other database providers. To add additional data source properties (for a specific JDBC driver) use the
dataSource.
prefix with the property name (e.g. dataSource.customProperty = value).这部分是用来配置 JDBC 连接和数据库驱动的,用来对节点数据进行持久化处理。Node database 包含了对于其他的数据库 driver 的例子。使用
dataSource.
前缀加上属性名字(比如 dataSource.customProperty = value)来添加额外的数据源属性(对于一个指定的 JDBC driver)。- dataSourceClassName
- JDBC Data Source class name.
- dataSource.url
- JDBC database URL.
- dataSource.user
- Database user.
- dataSource.password
- Database password.
Default:
dataSourceClassName = org.h2.jdbcx.JdbcDataSource dataSource.url = "jdbc:h2:file:"${baseDirectory}"/persistence;DB_CLOSE_ON_EXIT=FALSE;WRITE_DELAY=0;LOCK_TIMEOUT=10000" dataSource.user = sa dataSource.password = ""
- detectPublicIp
This flag toggles the auto IP detection behaviour. If enabled, on startup the node will attempt to discover its externally visible IP address first by looking for any public addresses on its network interfaces, and then by sending an IP discovery request to the network map service. Set to
true
to enable.这个标志值开启/关闭了是否自动发现 IP 的功能,默认是开启的。当节点启动的时候,它会尝试通过在它的网络接口上查找公用地址的方式去发现自己的外部可见的 IP 地址,然后会向 network map service 发送一个 IP 发现请求。将它设置为
true
来开启这个功能。Default: false
- devMode
This flag sets the node to run in development mode. On startup, if the keystore
<workspace>/certificates/sslkeystore.jks
does not exist, a developer keystore will be used ifdevMode
is true. The node will exit ifdevMode
is false and the keystore does not exist.devMode
also turns on background checking of flow checkpoints to shake out any bugs in the checkpointing process. Also, ifdevMode
is true, Hibernate will try to automatically create the schema required by Corda or update an existing schema in the SQL database; ifdevMode
is false, Hibernate will simply validate the existing schema, failing on node start if the schema is either not present or not compatible. If no value is specified in the node configuration file, the node will attempt to detect if it’s running on a developer machine and setdevMode=true
in that case. This value can be overridden from the command line using the--dev-mode
option.这个标志设定了节点是否是在开发模式下运行。在节点启动的时候,如果
<workspace>/certificates/sslkeystore.jks
的 keystore 文件不存在的话,如果devMode
是 true 的话,那么一个开发者 keystore 会被使用。如果devMode
设置为 false 并且 keystore 不存在的话,那么节点就会退出。devMode
同样也会打开后台对 flow checkpoints 的检查,来找到在 checkpointing 流程中存在的 bugs。并且,如果devMode
是 true 的话,Hibernate 会在 SQL 数据库中尝试自动地创建 Corda 要求的 schema 或者更新一个已经存在的 schema。如果devMode
是 false 的话,Hibernate 会简单地验证一个已经存在的 schema,如果这个 schema 不存在或者不兼容的话,那么节点就会启动失败。如果在节点配置文件中没有指定值的话,节点就会尝试发现节点是否运行在一个开发者的机器上,如果是的话会设置devMode=true
。这个值可以在命令行中使用 --dev-mode 选项来重写。Default: Corda will try to establish based on OS environment
- devModeOptions
Allows modification of certain
devMode
featuresImportant: This is an unsupported configuration.
- allowCompatibilityZone
Allows a node configured to operate in development mode to connect to a compatibility zone.
Default: not defined
- emailAddress
The email address responsible for node administration, used by the Compatibility Zone administrator.
Default: company@example.com
- extraNetworkMapKeys
An optional list of private network map UUIDs. Your node will fetch the public network and private network maps based on these keys. Private network UUID should be provided by network operator and lets you see nodes not visible on public network.
Important: This is a temporary feature for onboarding network participants that limits their visibility for privacy reasons.
Default: not defined
- flowMonitorPeriodMillis
The interval at which suspended flows waiting for IO are logged.
Default: 60 seconds
- flowMonitorSuspensionLoggingThresholdMillis
Threshold duration suspended flows waiting for IO need to exceed before they are logged.
Default: 60 seconds
- flowTimeout
When a flow implementing the
TimedFlow
interface and setting theisTimeoutEnabled
flag does not complete within a defined elapsed time, it is restarted from the initial checkpoint. Currently only used for notarisation requests with clustered notaries: if a notary cluster member dies while processing a notarisation request, the client flow eventually times out and gets restarted. On restart the request is resent to a different notary cluster member in a round-robin fashion. Note that the flow will keep retrying forever.- timeout
The initial flow timeout period.
Default: 30 seconds
- maxRestartCount
The number of retries the back-off time keeps growing for. For subsequent retries, the timeout value will remain constant.
Default: 6
- backoffBase
The base of the exponential backoff, t_{wait} = timeout * backoffBase^{retryCount}
Default: 1.8
- h2Port (deprecated)
Defines port for h2 DB.
Important: Deprecated, please use h2Settings instead
- h2Settings
Sets the H2 JDBC server host and port. See 访问 H2 数据库. For non-localhost address the database password needs to be set in
dataSourceProperties
.Default: NULL
- jarDirs
An optional list of file system directories containing JARs to include in the classpath when launching via
corda.jar
only. Each should be a string. Only the JARs in the directories are added, not the directories themselves. This is useful for including JDBC drivers and the like. e.g.jarDirs = [ ${baseDirectory}"/libs" ]
. (Note that you have to use thebaseDirectory
substitution value when pointing to a relative path).一个可选的包含 JARs 的文件系统路径列表,仅仅在通过
corda.jar
加载的时候会被包含在 classpath 中。每个应该是字符串。只有在路径下的 JARs 才会被加载,而不是路径本身。这个对于包括 JDBC drivers 的时候会很有用。比如:jarDirs = [ ${baseDirectory}"/libs" ]
。(注意当指向一个相对路径的时候,你需要使用baseDirectory
的替代值)Default: not defined
- jmxMonitoringHttpPort
If set, will enable JMX metrics reporting via the Jolokia HTTP/JSON agent on the corresponding port. Default Jolokia access url is http://127.0.0.1:port/jolokia/
Default: not defined
- jmxReporterType
Provides an option for registering an alternative JMX reporter. Available options are
JOLOKIA
andNEW_RELIC
.The Jolokia configuration is provided by default. The New Relic configuration leverages the Dropwizard NewRelicReporter solution. See Introduction to New Relic for Java for details on how to get started and how to install the New Relic Java agent.
Default:
JOLOKIA
- keyStorePassword
The password to unlock the KeyStore file (
<workspace>/certificates/sslkeystore.jks
) containing the node certificate and private key.解锁 KeyStore 文件(
<workspace>/certificates/sslkeystore.jks
)的密码,KeyStore 文件中包含了节点的证书(certificate)和私钥(private key)。Important: This is the non-secret value for the development certificates automatically generated during the first node run. Longer term these keys will be managed in secure hardware devices.
Important: 这是一个非保密(non-secret)的值,用于在节点第一次运行时自动生成的开发证书。长期来说,这些密钥将会在安全硬件设备(secure hardware devices)中进行管理。
Default: cordacadevpass
- lazyBridgeStart
Internal option.
Important: Please do not change.
Default: true
- messagingServerAddress
The address of the ArtemisMQ broker instance. If not provided the node will run one locally.
ArtemisMQ broker 实例的地址。如果没有指定的话,节点会在本地运行一个。
Default: not defined
- messagingServerExternal
If
messagingServerAddress
is specified the default assumption is that the artemis broker is running externally. Setting this tofalse
overrides this behaviour and runs the artemis internally to the node, but bound to the address specified inmessagingServerAddress
. This allows the address and port advertised inp2pAddress
to differ from the local binding, especially if there is external remapping by firewalls, load balancers , or routing rules. Note thatdetectPublicIp
should be set tofalse
to ensure that no translation of thep2pAddress
occurs before it is sent to the network map.Default: not defined
- myLegalName
The legal identity of the node. This acts as a human-readable alias to the node’s public key and can be used with the network map to look up the node’s info. This is the name that is used in the node’s certificates (either when requesting them from the doorman, or when auto-generating them in dev mode). At runtime, Corda checks whether this name matches the name in the node’s certificates. For more details please read 节点的命名 chapter.
节点的法律标识(legal identity)。它是节点公钥的一个易读的别名,可以配合 network map 用来查询节点的信息。这个名字会被用在节点的证书里(无论是向 doorman 申请证书,还是在 dev mode 下自动生成证书)。在运行时,Corda 会检查这个名字是否和节点证书中的名字一致。更多内容请阅读 节点的命名 章节。
Default: not defined
- notary
Optional configuration object which if present configures the node to run as a notary. If part of a Raft or BFT-SMaRt cluster then specify
raft
orbftSMaRt
respectively as described below. If a single node notary then omit both.这是一个可选的配置对象,如果添加了这个配置项那么该节点就会作为 notary 来运行。如果是一个 Raft 或者 BFT-SMaRt 集群的一部分,那么像下边描述的那样去指定
raft
或者bftSMaRt
。如果是单一的一个 notary,那么请忽略他们。- validating
Boolean to determine whether the notary is a validating or non-validating one.
Boolean 值,确定一个 notary 节点是否是一个 validating notary
Default: false
- serviceLegalName
If the node is part of a distributed cluster, specify the legal name of the cluster. At runtime, Corda checks whether this name matches the name of the certificate of the notary cluster.
Default: not defined
- raft
(Experimental) If part of a distributed Raft cluster, specify this configuration object with the following settings:
(探索性的) 如果该节点是一个分布式 Raft 集群的一部分的话,那么使用下边的配置指定这个配置对象:
- nodeAddress
The host and port to which to bind the embedded Raft server. Note that the Raft cluster uses a separate transport layer for communication that does not integrate with ArtemisMQ messaging services.
绑定到内置的 Raft server 的 host 和 port。注意:Raft 集群使用一个独立的 transport 层来进行沟通,这个并没有跟 ArtemisMQ 消息服务集成
Default: not defined
- clusterAddresses
Must list the addresses of all the members in the cluster. At least one of the members must be active and be able to communicate with the cluster leader for the node to join the cluster. If empty, a new cluster will be bootstrapped.
必须要列出这个 Raft 集群中所有成员的地址。如果节点想要加入一个集群,这些成员中至少要有一个是运行的状态并且能够跟集群的 leader 进行沟通。如果是空的,一个新的集群会被启动
Default: not defined
- bftSMaRt
(Experimental) If part of a distributed BFT-SMaRt cluster, specify this configuration object with the following settings:
(探索性的) 如果该节点是一个分布式的 BFT-SMaRt 集群的一部分的话,那么使用下边的配置指定这个配置对象:
- replicaId
The zero-based index of the current replica. All replicas must specify a unique replica id.
当前的 replica 的从 0 开始的 index 值。所有的 replicas 必须要指定一个唯一的 replica id
Default: not defined
- clusterAddresses
Must list the addresses of all the members in the cluster. At least one of the members must be active and be able to communicate with the cluster leader for the node to join the cluster. If empty, a new cluster will be bootstrapped.
必须要列出这个 BFT-SMaRt 集群中所有成员的地址。如果节点想要加入一个集群,这些成员中至少要有一个是运行的状态并且能够跟集群的 leader 进行沟通。如果是空的,一个新的集群会被启动
Default: not defined
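As an illustration only, the notary block for a validating member of a Raft cluster might look like this (the addresses and service name are hypothetical):
notary {
    validating = true
    serviceLegalName = "O=Raft Notary Service, L=London, C=GB"
    raft {
        nodeAddress = "localhost:10008"
        clusterAddresses = ["notary-1.example.com:10008", "notary-2.example.com:10008"]
    }
}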
- networkParameterAcceptanceSettings
Optional settings for managing the network parameter auto-acceptance behaviour. If not provided then the defined defaults below are used.
- autoAcceptEnabled
This flag toggles auto accepting of network parameter changes. If a network operator issues a network parameter change which modifies only auto-acceptable options and this behaviour is enabled then the changes will be accepted without any manual intervention from the node operator. See 网络地图 for more information on the update process and current auto-acceptable parameters. Set to
false
to disable.Default: true
- excludedAutoAcceptableParameters
List of auto-acceptable parameter names to explicitly exclude from auto-accepting. Allows a node operator to control the behaviour at a more granular level.
Default: empty list
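A sketch of these settings in node.conf (the excluded parameter name is hypothetical):
networkParameterAcceptanceSettings {
    autoAcceptEnabled = true
    excludedAutoAcceptableParameters = ["packageOwnership"]
}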
- networkServices
If the Corda compatibility zone services, both network map and registration (doorman), are not running on the same endpoint and thus have different URLs then this option should be used in place of the
compatibilityZoneURL
setting.Important: Only one of ``compatibilityZoneURL`` or ``networkServices`` should be used.
- doormanURL
Root address of the network registration service.
Default: not defined
- networkMapURL
Root address of the network map service.
Default: not defined
- pnm
Optional UUID of the private network operating within the compatibility zone this node should be joining.
Default: not defined
- p2pAddress
The host and port on which the node is available for protocol operations over ArtemisMQ.
通过 ArtemisMQ 对节点进行协议操作时的可用的主机(host)和端口号
In practice the ArtemisMQ messaging services bind to all local addresses on the specified port. However, note that the host is included as the advertised entry in the network map. As a result the value listed here must be externally accessible when running nodes across a cluster of machines. If the provided host is unreachable, the node will try to auto-discover its public one.
常规来说,ArtemisMQ 消息服务会绑定到 所有的本地地址 的指定的端口号。但是,要注意的是主机(host)会作为公布(advertised)的条目被包含在网络地图中。所以当在机器集群上运行节点的时候,这里所列出的值必须是可以从外部访问的。如果提供的 host 无法访问,节点会尝试自动发现它的公有地址。
Default: not defined
- rpcAddress (deprecated)
The address of the RPC system on which RPC requests can be made to the node. If not provided then the node will run without RPC.
RPC 系统的地址,RPC 请求可以通过它来发送给节点。如果没有指定的话,节点就会不使用 RPC 并运行。
Important: Deprecated. Use rpcSettings instead.
重要: 已废弃,请使用 rpcSettings
Default: not defined
- rpcSettings
Options for the RPC server exposed by the Node.
节点暴露的 RPC server 的选项
Important: The RPC SSL certificate is used by RPC clients to authenticate the connection. The Node operator must provide RPC clients with a truststore containing the certificate they can trust. We advise Node operators to not use the P2P keystore for RPC. The node can be run with the “generate-rpc-ssl-settings” command, which generates a secure keystore and truststore that can be used to secure the RPC connection. You can use this if you have no special requirements.
- address
host and port for the RPC server binding.
RPC server 要绑定的 host 和 port
Default: not defined
- adminAddress
host and port for the RPC admin binding (this is the endpoint that the node process will connect to).
RPC admin 绑定的 host 和 port(这是节点进程将会连接到的 endpoint)
Default: not defined
- standAloneBroker
boolean, indicates whether the node will connect to a standalone broker for RPC.
boolean 值,指定节点是否为 RPC 连接到一个独立的 broker
Default: false
- useSsl
boolean, indicates whether or not the node should require clients to use SSL for RPC connections.
boolean 值,指定节点是否要求 clients 使用 SSL 来进行 RPC 连接
Default: false
- ssl
(mandatory if
useSsl=true
) SSL settings for the RPC server.RPC server 的 SSL 配置,如果
useSsl=true
就是必须的。- keyStorePath
- Absolute path to the key store containing the RPC SSL certificate.
Default: not defined
- keyStorePassword
Password for the key store.
key store 的密码
Default: not defined
- rpcUsers
A list of users who are authorised to access the RPC system. Each user in the list is a configuration object with the following fields:
一个有权限访问 RPC 系统的用户列表。列表中的每个用户都是一个包含下列字段的配置对象:
- username
Username consisting only of word characters (a-z, A-Z, 0-9 and _)
用户名只能包含英文字母(a-z,A-Z,0-9 和 _)
Default: not defined
- password
The password
Default: not defined
- permissions
A list of permissions for starting flows via RPC. To give the user the permission to start the flow
foo.bar.FlowClass
, add the stringStartFlow.foo.bar.FlowClass
to the list. If the list contains the stringALL
, the user can start any flow via RPC. This value is intended for administrator users and for development.一个通过 RPC 可以启动的 flows 的权限列表。为了给一个用户能够启动
foo.bar.FlowClass
这个 flow 的权限,需要将字符串StartFlow.foo.bar.FlowClass
加到列表中。如果列表包含了字符串ALL
,用户就可以通过 RPC 启动任何的 flow。这个值主要是针对 administrator 用户以及开发时使用。Default: not defined
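A minimal rpcUsers entry might look like this (the username, password and flow class are hypothetical):
rpcUsers = [
    {
        username = "operator"
        password = "change-me"
        permissions = ["StartFlow.com.example.flows.ExampleFlow"]
    }
]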
- security
Contains various nested fields controlling user authentication/authorization, in particular for RPC accesses. See 与节点互动 for details.
包含了多个嵌套的字段来管理用户的 authentication/authorization,具体就是对于 RPC 的访问控制。查看 与节点互动 了解详细内容。
- sshd
If provided, node will start internal SSH server which will provide a management shell. It uses the same credentials and permissions as RPC subsystem. It has one required parameter.
如果提供了这个选项,节点会启动内部的 SSH server,这会提供一个管理 shell。这个会跟 RPC subsystem 使用相同的用户信息和权限。它包含一个必须的参数。
- port
The port to start SSH server on e.g.
sshd { port = 2222 }
.启动 SSH server 的 port,比如
sshd { port = 2222 }
Default: not defined
- systemProperties
An optional map of additional system properties to be set when launching via
corda.jar
only. Keys and values of the map should be strings. e.g.systemProperties = { visualvm.display.name = FooBar }
一个可选的额外的系统属性的 map,仅仅在通过
corda.jar
加载的时候会设置这些属性。这个 map 中的 Keys 和 values 应该是字符串。比如:systemProperties = { visualvm.display.name = FooBar }
Default: not defined
- transactionCacheSizeMegaBytes
Optionally specify how much memory should be used for caching of ledger transactions in memory.
可选的来配置应该使用多少内存来 cache ledger transactions
Default: 8 MB plus 5% of all heap memory above 300MB.
- tlsCertCrlDistPoint
CRL distribution point (i.e. URL) for the TLS certificate. Default value is NULL, which indicates no CRL availability for the TLS certificate.
Important: This needs to be set if crlCheckSoftFail is false (i.e. strict CRL checking is on).
Default: NULL
- tlsCertCrlIssuer
CRL issuer (given in the X500 name format) for the TLS certificate. Default value is NULL, which indicates that the issuer of the TLS certificate is also the issuer of the CRL.
Important: If this parameter is set then `tlsCertCrlDistPoint` needs to be set as well.
Default: NULL
- trustStorePassword
The password to unlock the Trust store file (
<workspace>/certificates/truststore.jks
) containing the Corda network root certificate. This is the non-secret value for the development certificates automatically generated during the first node run.解锁 Trust store 文件(
<workspace>/certificates/truststore.jks
)的密码,Trust store 文件包含了 Corda 网络的根证书(root certificate)。这是一个非保密(non-secret)的值,用于在节点第一次运行时自动生成的开发证书。Default: trustpass
- useTestClock
Internal option.
Important: Please do not change.
Default: false
- verifierType
Internal option.
Important: Please do not change.
Default: InMemory
Reference.conf¶
A set of default configuration options are loaded from the built-in resource file /node/src/main/resources/reference.conf
.
This file can be found in the :node
gradle module of the Corda repository.
Any options you do not specify in your own node.conf
file will use these defaults.
一系列的默认配置选项从内置的源文件 /node/src/main/resources/reference.conf
加载进来。这个文件可以在 Corda repository 的 :node
gradle module 中找到。任何你没有在你的 node.conf
指定的选项都会使用这些默认值。
Here are the contents of the reference.conf
file:
下边是 reference.conf
的内容:
additionalP2PAddresses = []
crlCheckSoftFail = true
database = {
transactionIsolationLevel = "REPEATABLE_READ"
exportHibernateJMXStatistics = "false"
}
dataSourceProperties = {
dataSourceClassName = org.h2.jdbcx.JdbcDataSource
dataSource.url = "jdbc:h2:file:"${baseDirectory}"/persistence;DB_CLOSE_ON_EXIT=FALSE;WRITE_DELAY=0;LOCK_TIMEOUT=10000"
dataSource.user = sa
dataSource.password = ""
}
emailAddress = "admin@company.com"
flowTimeout {
timeout = 30 seconds
maxRestartCount = 6
backoffBase = 1.8
}
jmxReporterType = JOLOKIA
keyStorePassword = "cordacadevpass"
lazyBridgeStart = true
rpcSettings = {
useSsl = false
standAloneBroker = false
}
trustStorePassword = "trustpass"
useTestClock = false
verifierType = InMemory
配置样例¶
Node configuration hosting the IRSDemo services¶
General node configuration file for hosting the IRSDemo services
myLegalName = "O=Bank A,L=London,C=GB"
keyStorePassword = "cordacadevpass"
trustStorePassword = "trustpass"
crlCheckSoftFail = true
dataSourceProperties {
dataSourceClassName = org.h2.jdbcx.JdbcDataSource
dataSource.url = "jdbc:h2:file:"${baseDirectory}"/persistence"
dataSource.user = sa
dataSource.password = ""
}
p2pAddress = "my-corda-node:10002"
rpcSettings {
useSsl = false
standAloneBroker = false
address = "my-corda-node:10003"
adminAddress = "my-corda-node:10004"
}
rpcUsers = [
{ username=user1, password=letmein, permissions=[ StartFlow.net.corda.protocols.CashProtocol ] }
]
devMode = true
Simple notary configuration file¶
myLegalName = "O=Notary Service,OU=corda,L=London,C=GB"
keyStorePassword = "cordacadevpass"
trustStorePassword = "trustpass"
p2pAddress = "localhost:12345"
rpcSettings {
useSsl = false
standAloneBroker = false
address = "my-corda-node:10003"
adminAddress = "my-corda-node:10004"
}
notary {
validating = false
}
devMode = false
networkServices {
doormanURL = "https://cz.example.com"
networkMapURL = "https://cz.example.com"
}
Node configuration with different URLs for NetworkMap and Doorman¶
Configuring a node where the Corda Compatibility Zone’s registration and Network Map services exist on different URLs
myLegalName = "O=Bank A,L=London,C=GB"
keyStorePassword = "cordacadevpass"
trustStorePassword = "trustpass"
crlCheckSoftFail = true
dataSourceProperties {
dataSourceClassName = org.h2.jdbcx.JdbcDataSource
dataSource.url = "jdbc:h2:file:"${baseDirectory}"/persistence"
dataSource.user = sa
dataSource.password = ""
}
p2pAddress = "my-corda-node:10002"
rpcSettings {
useSsl = false
standAloneBroker = false
address = "my-corda-node:10003"
adminAddress = "my-corda-node:10004"
}
rpcUsers = [
{ username=user1, password=letmein, permissions=[ StartFlow.net.corda.protocols.CashProtocol ] }
]
devMode = false
networkServices {
doormanURL = "https://registration.example.com"
networkMapURL = "https://cz.example.com"
}
Node command-line options¶
The node can optionally be started with the following command-line options:
--base-directory
,-b
: The node working directory where all the files are kept (default:.
).--config-file
,-f
: The path to the config file. Defaults tonode.conf
.--dev-mode
,-d
: Runs the node in development mode. Unsafe in production. Defaults to true on MacOS and desktop versions of Windows. False otherwise.--no-local-shell
,-n
: Do not start the embedded shell locally.--on-unknown-config-keys <[FAIL,IGNORE]>
: How to behave on unknown node configuration keys. Defaults to FAIL.--sshd
: Enables SSH server for node administration.--sshd-port
: Sets the port for the SSH server. If not supplied and SSH server is enabled, the port defaults to 2222.--verbose
,--log-to-console
,-v
: If set, prints logging to the console as well as to a file.--logging-level=<loggingLevel>
: Enable logging at this level and higher. Possible values: ERROR, WARN, INFO, DEBUG, TRACE. Default: INFO.--help
,-h
: Show this help message and exit.--version
,-V
: Print version information and exit.
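Putting a few of these options together, a typical start-up command might look like this (the paths and port are hypothetical):
java -jar corda.jar --base-directory=/opt/corda --logging-level=INFO --sshd --sshd-port=2222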
Sub-commands¶
clear-network-cache
: Clears local copy of network map, on node startup it will be restored from server or file system.
initial-registration
: Starts initial node registration with the compatibility zone to obtain a certificate from the Doorman.
Parameters:
--network-root-truststore
,-t
required: Network root trust store obtained from network operator.--network-root-truststore-password
,-p
: Network root trust store password obtained from network operator.
generate-node-info
: Performs the node start-up tasks necessary to generate the nodeInfo file, saves it to disk, then exits.
generate-rpc-ssl-settings
: Generates the SSL keystore and truststore for a secure RPC connection.
install-shell-extensions
: Install corda
alias and auto completion for bash and zsh. See Shell extensions for CLI Applications for more info.
validate-configuration
: Validates the actual configuration without starting the node.
Enabling remote debugging¶
To enable remote debugging of the node, run the node with the following JVM arguments:
java -Dcapsule.jvm.args="-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005" -jar corda.jar
This will allow you to attach a debugger to your node on port 5005.
管理节点¶
Logging¶
By default the node log files are stored to the logs
subdirectory of the working directory and are rotated from time
to time. You can have logging printed to the console as well by passing the --log-to-console
command line flag.
The default logging level is INFO
which can be adjusted by the --logging-level
command line argument. This configuration
option will affect all modules. Hibernate (the JPA provider used by Corda) specific log messages of level WARN
and above
will be logged to the diagnostic log file, which is stored in the same location as other log files (logs
subdirectory
by default). This is because Hibernate may log messages at WARN and ERROR that are handled internally by Corda and do not
need operator attention. If they do, they will be logged by Corda itself in the main node log file.
默认的,节点的 log 文件会存储在工作目录下的 logs
子目录中,并且会定期地轮转(rotate)。你可以通过传入 --log-to-console
命令行参数来将日志同时打印到 console 中。默认的日志等级是 INFO,可以通过 --logging-level
命令行参数来调整。这个配置选项会影响所有的模块。Hibernate(Corda 使用的 JPA provider)的 WARN 及以上级别的 log 消息会被写入 diagnostic log 文件,这个文件和其他 log 文件存储在相同的位置(默认是 logs 子目录)。这是因为 Hibernate 可能会以 WARN 或 ERROR 级别记录一些由 Corda 内部处理、不需要运维人员关注的消息;如果真的需要关注,Corda 自己会把它们记录在节点的主 log 文件中。
It may be the case that you require to amend the log level of a particular subset of modules (e.g., if you’d like to take a
closer look at hibernate activity). So, for more bespoke logging configuration, the logger settings can be completely overridden
with a Log4j2 configuration file assigned to the log4j.configurationFile
system property.
有时候你可能想针对某个模块子集变更 log 级别(比如你想更详细地查看 Hibernate 的活动)。所以,对于更加定制化的 logging 配置,可以通过把一个 Log4j2 配置文件赋值给 log4j.configurationFile
系统属性,来彻底重写 logger 的设置。
The node is using log4j2 asynchronous logging by default (configured via log4j2 properties file in its resources)
to ensure that log message flushing is not slowing down the actual processing.
If you need to switch to synchronous logging (e.g. for debugging/testing purposes), you can override this behaviour
by adding -DLog4jContextSelector=org.apache.logging.log4j.core.selector.ClassLoaderContextSelector
to the node’s
command line or to the jvmArgs
section of the node configuration (see 节点的配置).
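For example, a sketch of starting the node with synchronous logging:
java -DLog4jContextSelector=org.apache.logging.log4j.core.selector.ClassLoaderContextSelector -jar corda.jar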
Example¶
Create a file sql.xml
in the current working directory. Add the following text :
在当前的工作目录中创建一个 sql.xml 文件,并添加以下内容:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
<Appenders>
<Console name="Console" target="SYSTEM_OUT">
<PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
</Console>
</Appenders>
<Loggers>
<Logger name="org.hibernate" level="debug" additivity="false">
<AppenderRef ref="Console"/>
</Logger>
<Root level="error">
<AppenderRef ref="Console"/>
</Root>
</Loggers>
</Configuration>
Note the addition of a logger named org.hibernate
that has set this particular logger level to debug
.
注意一个额外的名为 org.hibernate
的 logger 设置了指定的 logger level 为 debug
。
Now start the node as usual but with the additional parameter log4j.configurationFile
set to the filename as above, e.g.
像常规一样启动节点,但是带有一个额外的参数 log4j.configurationFile
,指向上边的文件名
java <Your existing startup options here> -Dlog4j.configurationFile=sql.xml -jar corda.jar
To determine the name of the logger, for Corda objects, use the fully qualified name (e.g., to look at node output
in more detail, use net.corda.node.internal.Node
although be aware that as we have marked this class internal
we
reserve the right to move and rename it as it’s not part of the public API as yet). For other libraries, refer to their
logging name construction. If you can’t find what you need to refer to, use the --logging-level
option as above and
then determine the logging module name from the console output.
为了确定 logger 的名字,对于 Corda 对象,使用完全合格的名字,fully qualified name(比如为了查看节点的 output 更详细的信息,可以使用 net.corda.node.internal.Node
,但是要知道我们给这个类标记为 internal
,我们保留移动或者改变名字的权利因为他还不是公开 API 的一部分)。对于其他的类库,参考他们的 logging 名字结构。如果你找不到你需要参考什么,像上边那样使用 --logging-level
选项然后从 console 的 output 中确定 logging 模块的名字。
SSH access¶
The node can be configured to run an SSH server. See Node shell for details.
数据库访问¶
When running a node backed by an H2 database, the node can be configured to expose the database over a socket (see 访问 H2 数据库).
Note that in production, exposing the database via the node is not recommended.
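For example, the H2 database can be exposed on a socket via the h2Settings block in node.conf (the address is hypothetical; as noted in the field description earlier, a non-localhost address also requires a database password in dataSourceProperties):
h2Settings {
    address = "localhost:12345"
}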
监控你的节点¶
Like most Java servers, the node can be configured to export various useful metrics and management operations via the industry-standard JMX infrastructure. JMX is a standard API for registering so-called MBeans … objects whose properties and methods are intended for server management. As Java serialization in the node has been restricted for security reasons, the metrics can only be exported via a Jolokia agent.
像大多数的 Java servers 一样,节点可以通过业界标准的 JMX infrastructure 导出很多有用的 metrics 和管理操作。JMX 是一个用来注册所谓 MBeans 的标准 API,MBeans 对象的属性和方法被用来进行 server 的管理。由于节点中的 Java 序列化出于安全原因被限制了,这些 metrics 只能通过一个 Jolokia agent 来导出。
Jolokia allows you to access the raw data and operations without connecting to the JMX port
directly. Nodes can be configured to export the data over HTTP on the /jolokia
HTTP endpoint, Jolokia defines the JSON and REST
formats for accessing MBeans, and provides client libraries to work with that protocol as well.
Jolokia 允许你不需要直接连接 JMX port 就可以访问 raw data 和维护操作。节点通过 HTTP 在 /jolokia
HTTP endpoint 上导出数据,Jolokia 定义了 JSON 和 REST 格式来访问 MBeans,也提供了客户端类库来跟这个协议一同工作。
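For example, enabling the Jolokia endpoint only requires setting the monitoring port in node.conf (the port number is hypothetical):
jmxMonitoringHttpPort = 7005
The metrics are then available at http://127.0.0.1:7005/jolokia/, matching the default access URL described for jmxMonitoringHttpPort above.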
Here are a few ways to build dashboards and extract monitoring data for a node:
- Hawtio is a web based console that connects directly to JVM's that have been instrumented with a jolokia agent. This tool provides a nice JMX dashboard very similar to the traditional JVisualVM / JConsole MBeans original.
- JMX2Graphite is a tool that can be pointed to /monitoring/json and will scrape the statistics found there, then insert them into the Graphite monitoring tool on a regular basis. It runs in Docker and can be started with a single command.
- JMXTrans is another tool for Graphite, this time, it’s got its own agent (JVM plugin) which reads a custom config file and exports only the named data. It’s more configurable than JMX2Graphite and doesn’t require a separate process, as the JVM will write directly to Graphite.
- Cloud metrics services like New Relic also understand JMX, typically, by providing their own agent that uploads the data to their service on a regular schedule.
- Telegraf is a tool to collect, process, aggregate, and write metrics. It can bridge any data input to any output using their plugin system, for example, Telegraf can be configured to collect data from Jolokia and write to DataDog web api.
有以下几种方式来创建 dashboard 并导出节点的监控数据:
- Hawtio 是一个基于 web 的 console,能够直接同使用 jolokia agent 的 JVM 连接。这个工具提供了一个非常好的 JMX dashboard,跟传统的 JVisualVM/JConsole MBeans 很像。
- JMX2Graphite 是一个工具,可以指向 /monitoring/json 并抓取那里的统计数据,然后将它们定期地插入到 Graphite 监控工具中。它在 Docker 中运行,并且可以通过一条命令来启动。
- JMXTrans 是对 Graphite 的另外一个工具,它会有一个自己的 agent(JVM plugin),用来读取一个自定义的 config 文件并且只导出 named data。相对于 JMX2Graphite 它更具有可配置性,并且不需要一个单独的 process,因为 JVM 会直接向 Graphite 中写入。
- Cloud metrics service 像 New Relic 同样理解 JMX,通常会提供他们自己的 agent,按照常规的计划来将数据上传到他们的服务中。
- Telegraf 是一个工具,来搜集、处理、聚合并且书写 metrics。它可以使用他们的 plugin 系统连接任何的数据输入和输出,比如 Telegraf 能够被配置来从 Jolokia 搜集数据,然后写到 DataDog web api。
The Node configuration parameter jmxMonitoringHttpPort has to be present in order to ensure a Jolokia agent is instrumented with the JVM run-time.
节点的配置参数 jmxMonitoringHttpPort 必须被设置,以确保 Jolokia agent 会随 JVM run-time 一起加载。
The following JMX statistics are exported:
- Corda specific metrics: flow information (total started, finished, in-flight; flow duration by flow type), attachments (count)
- Apache Artemis metrics: queue information for P2P and RPC services
- JVM statistics: classloading, garbage collection, memory, runtime, threading, operating system
下边的 JMX statistics 可以被导出:
- Corda 指定的 metrics:flow 信息(总共开始的、结束的、in-flight 的 flow;不同 flow type 的 flow duration),attachments(数量)
- Apache Artemis metrics:P2P 和 RPC 服务的 queue 信息
- JVM statistics:classloading、垃圾回收、内存、runtime、线程、操作系统
Notes for production use¶
When using Jolokia monitoring in production, it is recommended to use a Jolokia agent that reads the metrics from the node and pushes them to the metrics storage, rather than exposing a port on the production machine/process to the internet.
Also ensure to have restrictive Jolokia access policy in place for access to production nodes. The Jolokia access is controlled
via a file called jolokia-access.xml
.
Several Jolokia policy based security configuration files (jolokia-access.xml
) are available for dev, test, and prod
environments under /config/<env>
.
Notes for development use¶
When running in dev mode, Hibernate statistics are also available via the Jolokia interface. These are disabled otherwise
due to expensive run-time costs. They can be turned on and off explicitly regardless of dev mode via the
exportHibernateJMXStatistics
flag on the database configuration.
When starting Corda nodes using Cordformation runner (see Running nodes locally), you should see a startup message similar to the following: Jolokia: Agent started with URL http://127.0.0.1:7005/jolokia/
When starting Corda nodes using the ‘driver DSL’, you should see a startup message in the logs similar to the following: Starting out-of-process Node USA Bank Corp, debug port is not enabled, jolokia monitoring port is 7005 {}
The following diagram illustrates Corda flow metrics visualized using hawtio:

内存使用和优化¶
All garbage collected programs can run faster if you give them more memory, as they need to collect less frequently. As a default JVM will happily consume all the memory on your system if you let it, Corda is configured with a 512mb Java heap by default. When other overheads are added, this yields a total memory usage of about 800mb for a node (the overheads come from things like compiled code, metadata, off-heap buffers, thread stacks, etc).
对所有的垃圾搜集程序来说,如果你给他们更多的内存他们会运行的更快,因为他们会更少地需要去搜集。默认的如果你让 JVM 消耗掉你系统中的所有内存的话,那么 JVM 会很愿意那样去做的,Corda 默认会设置为相对比较小的 512mb Java heap。当其他的部分也在消耗内存的时候,一个节点的内存的总体使用量大概会在 800mb 左右(这些消耗可能来自于编译代码、metadata、off-heap buffers、线程栈等)。
If you want to make your node go faster and profiling suggests excessive GC overhead is the cause, or if your node is running out of memory, you can give it more by running the node like this:
如果你希望你的节点运行得更快,并且 profiling 显示过多的 GC 开销是原因所在,或者你的节点出现了 out of memory 的问题,你可以用下边的方式运行节点来给它分配更多的内存:
java -Dcapsule.jvm.args="-Xmx1024m" -jar corda.jar
The example command above would give a 1 gigabyte Java heap.
这个例子命令会提供一个 1 gigabyte Java heap。
注解
Unfortunately the JVM does not let you limit the total memory usage of a Java program, just the heap size.
注解
很遗憾,JVM 不允许你限制一个 Java 程序的总内存使用量,只能限制 heap 的大小。
Hiding sensitive data¶
A frequent requirement is that configuration files must not expose passwords to unauthorised readers. By leveraging environment variables, it is possible to hide passwords and other similar fields.
Take a simple node config that wishes to protect the node cryptographic stores:
myLegalName = "O=PasswordProtectedNode,OU=corda,L=London,C=GB"
keyStorePassword = ${KEY_PASS}
trustStorePassword = ${TRUST_PASS}
p2pAddress = "localhost:12345"
devMode = false
networkServices {
doormanURL = "https://cz.example.com"
networkMapURL = "https://cz.example.com"
}
By delegating to a password store, and using command substitution it is possible to ensure that sensitive passwords never appear in plain text.
The below examples are of loading Corda with the KEY_PASS and TRUST_PASS variables read from a program named corporatePasswordStore
.
Bash¶
KEY_PASS=$(corporatePasswordStore --cordaKeyStorePassword) TRUST_PASS=$(corporatePasswordStore --cordaTrustStorePassword) java -jar corda.jar
警告
If this approach is taken, the passwords will appear in the shell history.
Windows PowerShell¶
$env:KEY_PASS=$(corporatePasswordStore --cordaKeyStorePassword); $env:TRUST_PASS=$(corporatePasswordStore --cordaTrustStorePassword); java -jar corda.jar
For launching on Windows without PowerShell, it is not possible to perform command substitution, and so the variables must be specified manually, for example:
SET KEY_PASS=mypassword & SET TRUST_PASS=mypassword & java -jar corda.jar
警告
If this approach is taken, the passwords will appear in the windows command prompt history.
Backup recommendations¶
Various components of the Corda platform read their configuration from the file system, and persist data to a database or into files on disk. Given that hardware can fail, operators of IT infrastructure must have a sound backup strategy in place. Whilst blockchain platforms can sometimes recover some lost data from their peers, it is rarely the case that a node can recover its full state in this way because real-world blockchain applications invariably contain private information (e.g., customer account information). Moreover, this private information must remain in sync with the ledger state. As such, we strongly recommend implementing a comprehensive backup strategy.
The following elements of a backup strategy are recommended:
Database replication¶
When properly configured, database replication prevents data loss from occurring in case the database host fails. In general, the higher the number of replicas, and the further away they are deployed in terms of regions and availability zones, the more a setup is resilient to disasters. The trade-off is that, ideally, replication should happen synchronously, meaning that a high number of replicas and a considerable network latency will impact the performance of the Corda nodes connecting to the cluster. Synchronous replication is strongly advised to prevent data loss.
Database snapshots¶
Database replication is a powerful technique, but it is very sensitive to destructive SQL updates. Whether malicious or unintentional, a SQL statement might compromise data by getting propagated to all replicas. Without rolling snapshots, data loss due to such destructive updates will be irreversible. Using snapshots always implies some data loss in case of a disaster, and the trade-off is between highly frequent backups minimising such a loss, and less frequent backups consuming less resources. At present, Corda does not offer online updates with regards to transactions. Should states in the vault ever be lost, partial or total recovery might be achieved by asking third-party companies and/or notaries to provide all data relevant to the affected legal identity.
File backups¶
Corda components read and write information from and to the file-system. The advice is to backup the entire root directory of the component, plus any external directories and files optionally specified in the configuration. Corda assumes the filesystem is reliable. You must ensure that it is configured to provide this assurance, which means you must configure it to synchronously replicate to your backup/DR site. If the above holds, Corda components will benefit from the following:
- Guaranteed eventual processing of acknowledged client messages, provided that the backlog of persistent queues is not lost irremediably.
- A timely recovery from deletion or corruption of configuration files (e.g.,
node.conf
,node-info
files, etc.), database drivers, CorDapps binaries and configuration, and certificate directories, provided backups are available to restore from.
警告
Private keys used to sign transactions should be preserved with the utmost care. The recommendation is to keep at least two separate copies on a storage not connected to the Internet.
部署节点¶
注解
These instructions are intended for people who want to deploy a Corda node to a server, whether they have developed and tested a CorDapp following the instructions in 创建本地节点 or are deploying a third-party CorDapp.
注解
这个指导是为了需要将一个 Corda 节点部署到 server 上的开发者,他们或是按照在 创建本地节点 的指引来开发并测试了一个 CorDapp,或是在部署一个第三方的 CorDapp。
Linux:安装并且运行 Corda 作为一个系统的服务¶
We recommend creating system services to run a node and the optional webserver. This provides logging and service handling, and ensures the Corda service is run at boot.
我们建议创建一个系统服务来运行节点,还可以选择用系统服务来运行 webserver。这个会提供 logging 和 service handling,并且确保了 Corda 服务会在 server 启动时自动启动。
Prerequisites:
- A supported Java distribution. The supported versions are listed in 快速搭建 CorDapp 开发环境
- 一个支持的 Java distribution。支持的版本在 快速搭建 CorDapp 开发环境 中有说明
As root/sys admin user - add a system user which will be used to run Corda:
作为 root/sys 管理员用户 - 添加一个被用来运行 Corda 的系统用户
sudo adduser --system --no-create-home --group corda
Create a directory called
/opt/corda
and change its ownership to the user you want to use to run Corda:创建一个名为
/opt/corda
的路径然后将它的 ownership 变为要运行 Corda 的那个用户mkdir /opt/corda; chown corda:corda /opt/corda
Download the Corda jar (under
/4.1-RC01/corda-4.1-RC01.jar
) and place it in/opt/corda
下载 Corda jar (在
/4.1-RC01/corda-4.1-RC01.jar
下)并且把它放在/opt/corda
里(Optional) Download the Corda webserver jar (under
/4.1-RC01/corda-4.1-RC01.jar
) and place it in/opt/corda
(可选步骤)下载 Corda webserver jar(下载地址:http://r3.bintray.com/corda/net/corda/corda-webserver/,在 /4.1-RC01/corda-4.1-RC01.jar 下)并且把它放在
/opt/corda
里Create a directory called
cordapps
in/opt/corda
and save your CorDapp jar file to it. Alternatively, download one of our sample CorDapps to thecordapps
directory在
/opt/corda
里创建一个名为cordapps
的路径,并且将你的 CorDapp jar 文件放到里边。你也可以下载我们的 CorDapps 样例 到cordapps
路径下Save the below as
/opt/corda/node.conf
. See 节点的配置 for a description of these options:将以下的
node.conf
保存到/opt/corda/node.conf
。查看 节点的配置 了解这些配置项的描述:p2pAddress = "example.com:10002" rpcSettings { address: "example.com:10003" adminAddress: "example.com:10004" } h2port = 11000 emailAddress = "you@example.com" myLegalName = "O=Bank of Breakfast Tea, L=London, C=GB" keyStorePassword = "cordacadevpass" trustStorePassword = "trustpass" devMode = false rpcUsers= [ { user=corda password=portal_password permissions=[ ALL ] } ] custom { jvmArgs = [ '-Xmx2048m', '-XX:+UseG1GC' ] }
Make the following changes to
/opt/corda/node.conf
:- Change the
p2pAddress
,rpcSettings.address
andrpcSettings.adminAddress
values to match your server’s hostname or external IP address. These are the addresses other nodes or RPC interfaces will use to communicate with your node. - Change the ports if necessary, for example if you are running multiple nodes on one server (see below).
- Enter an email address which will be used as an administrative contact during the registration process. This is only visible to the permissioning service.
- Enter your node’s desired legal name (see 节点的命名 for more details).
- If required, add RPC users
- 对
/opt/corda/node.conf
进行下边的修改- 将
p2pAddress
、rpcSettings.address
和rpcSettings.adminAddress
的值修改为你的 server 的 hostname 或者外部的 IP 地址。其他的节点或者 RPC 接口会使用这些地址来和你的节点进行沟通 - 如果需要的话改变端口号,比如你在同一个 server 上运行了多个节点
- 输入一个 email address,会在注册的流程中作为管理员的联系方式。这个只有 permissioning service 能够看到
- 输入你的节点期望的 legal name(查看 节点的命名 了解更多信息)。
- 如果需要的话,添加 RPC 用户
注解
Ubuntu 16.04 and most current Linux distributions use SystemD, so if you are running one of these distributions follow the steps marked SystemD. If you are running Ubuntu 14.04, follow the instructions for Upstart.
注解
Ubuntu 16.04 以及大多数当前的 Linux distributions 使用 SystemD,所以如果你在运行着这些 distributions 中的一个,那你需要按照下边标记为 SystemD 的步骤。如果你运行的是 Ubuntu 14.04,那么按照下边的标记为 Upstart 的步骤。
SystemD: Create a
corda.service
file based on the example below and save it in the/etc/systemd/system/
directorySystemD:根据下边的例子创建一个
corda.service
文件,并且将它保存在 /etc/systemd/system/
路径下:
[Unit]
Description=Corda Node - Bank of Breakfast Tea
Requires=network.target

[Service]
Type=simple
User=corda
WorkingDirectory=/opt/corda
ExecStart=/usr/bin/java -jar /opt/corda/corda.jar
Restart=on-failure

[Install]
WantedBy=multi-user.target
Upstart: Create a
corda.conf
file based on the example below and save it in the/etc/init/
directoryUpstart:根据下边的例子创建一个
corda.conf
文件,并且将它保存在 /etc/init/ 路径下:
description "Corda Node - Bank of Breakfast Tea"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
setuid corda
chdir /opt/corda
exec java -jar /opt/corda/corda.jar
Make the following changes to
corda.service
orcorda.conf
:Make sure the service description is informative - particularly if you plan to run multiple nodes.
Change the username to the user account you want to use to run Corda. We recommend that this user account is not root
SystemD: Make sure the
corda.service
file is owned by root with the correct permissions:sudo chown root:root /etc/systemd/system/corda.service
sudo chmod 644 /etc/systemd/system/corda.service
Upstart: Make sure the
corda.conf
file is owned by root with the correct permissions:sudo chown root:root /etc/init/corda.conf
sudo chmod 644 /etc/init/corda.conf
按照下边修改
corda.service
或者corda.conf
:确保 service 描述是有意义的 - 特别是你想要运行多个节点的时候
将 username 修改成你想要用来运行 Corda 的用户账户。我们建议这个用户账号不是 root
SystemD:确保
corda.service
文件是 root 所有并且有正确的权限:sudo chown root:root /etc/systemd/system/corda.service
sudo chmod 644 /etc/systemd/system/corda.service
Upstart:确保
corda.conf
是被 root 所有并且有正确的权限:sudo chown root:root /etc/init/corda.conf
sudo chmod 644 /etc/init/corda.conf
注解
The Corda webserver provides a simple interface for interacting with your installed CorDapps in a browser. Running the webserver is optional.
注解
Corda webserver 提供了一个在浏览器中能够跟你安装的 CorDapps 进行互动的简单接口。运行 webserver 不是必须的。
SystemD: Create a
corda-webserver.service
file based on the example below and save it in the/etc/systemd/system/
directorySystemD:根据下边的例子创建一个
corda-webserver.service
文件并把它存在/etc/systemd/system/
路径下:
[Unit]
Description=Webserver for Corda Node - Bank of Breakfast Tea
Requires=network.target

[Service]
Type=simple
User=corda
WorkingDirectory=/opt/corda
ExecStart=/usr/bin/java -jar /opt/corda/corda-webserver.jar
Restart=on-failure

[Install]
WantedBy=multi-user.target
Upstart: Create a
corda-webserver.conf
file based on the example below and save it in the/etc/init/
directoryUpstart:基于下边的例子创建一个
corda-webserver.conf
的文件并将它放在/etc/init/
路径下:
description "Webserver for Corda Node - Bank of Breakfast Tea"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
setuid corda
chdir /opt/corda
exec java -jar /opt/corda/corda-webserver.jar
Provision the required certificates to your node. Contact the network permissioning service or see Network certificates
为你的节点生成证书。联系 network permissioning service 或者查看 Network certificates
SystemD: You can now start a node and its webserver and set the services to start on boot by running the following
systemctl
commands:SystemD:现在你就可以启动一个节点和它的 webserver,通过运行下边的
systemctl
命令来将 service 设置为同系统启动一起运行:
sudo systemctl daemon-reload
sudo systemctl enable --now corda
sudo systemctl enable --now corda-webserver
- Upstart: You can now start a node and its webserver by running the following commands:
Upstart:现在你就可以通过运行下边的命令启动一个节点和它的 webserver:
sudo start corda
sudo start corda-webserver
The Upstart configuration files created above tell Upstart to start the Corda services on boot so there is no need to explicitly enable them.
上边创建的 Upstart 配置文件会告诉 Upstart 在 server 重启的时候要运行 Corda services,所以这里不需要显式地开启他们。
You can run multiple nodes by creating multiple directories and Corda services, modifying the node.conf
and
SystemD or Upstart configuration files so they are unique.
你可以通过创建多个路径和 Corda services 来运行多个节点,修改 node.conf
和 SystemD 或者 Upstart 配置文件,这样他们就都是唯一的了。
Windows:作为 Windows service 来安装和运行 Corda¶
We recommend running Corda as a Windows service. This provides service handling, ensures the Corda service is run at boot, and means the Corda service stays running with no users connected to the server.
我们建议将 Corda 作为一个 Windows service 来运行。这提供了 service handling,确保了 Corda 能够在系统重启后自动运行,这意味着我们不需要有人去连接到 server, Corda 就能够始终保持运行状态。
Prerequisites:
- A supported Java distribution. The supported versions are listed in 快速搭建 CorDapp 开发环境
- 一个支持的 Java destribution。支持的版本在 快速搭建 CorDapp 开发环境 中有说明
Create a Corda directory and download the Corda jar. Here’s an example using PowerShell:
创建一个 Corda 目录,然后下载 Corda jar。下边是一个使用 PowerShell 的例子:
mkdir C:\Corda
wget http://jcenter.bintray.com/net/corda/corda/4.1-RC01/corda-4.1-RC01.jar -OutFile C:\Corda\corda.jar
Create a directory called
cordapps
inC:\Corda\
and save your CorDapp jar file to it. Alternatively, download one of our sample CorDapps to thecordapps
directory在
C:\Corda\
下创建一个名为cordapps
的目录,然后将你的 CorDapp jar 文件存储到这里。或者也可以从我们的 CorDapps 样例 中下载一个放到cordapps
目录下。Save the below as
C:\Corda\node.conf
. See 节点的配置 for a description of these options:把下边的内容存储为
C:\Corda\node.conf
。查看 节点的配置 来了解这些选项的详细介绍:
p2pAddress = "example.com:10002"
rpcSettings {
    useSsl = false
    standAloneBroker = false
    address = "example.com:10003"
    adminAddress = "example.com:10004"
}
h2port = 11000
emailAddress = "you@example.com"
myLegalName = "O=Bank of Breakfast Tea, L=London, C=GB"
keyStorePassword = "cordacadevpass"
trustStorePassword = "trustpass"
devMode = false
custom {
    jvmArgs = [ '-Xmx2048m', '-XX:+UseG1GC' ]
}
Make the following changes to
C:\Corda\node.conf
:- Change the
p2pAddress
,rpcSettings.address
andrpcSettings.adminAddress
values to match your server’s hostname or external IP address. These are the addresses other nodes or RPC interfaces will use to communicate with your node. - Change the ports if necessary, for example if you are running multiple nodes on one server (see below).
- Enter an email address which will be used as an administrative contact during the registration process. This is only visible to the permissioning service.
- Enter your node’s desired legal name (see 节点的命名 for more details).
- If required, add RPC users
对
C:\Corda\node.conf
做以下的修改:- 将
p2pAddress
、rpcSettings.address
和rpcSettings.adminAddress
的值修改为你的 server 的 hostname 或者外部的 IP 地址。其他的节点或者 RPC 接口会使用这些地址来和你的节点进行沟通 - 如果需要的话改变端口号,比如你在同一个 server 上运行了多个节点
- 输入一个 email address,会在注册的流程中作为管理员的联系方式。这个只有 permissioning service 能够看到
- 输入你的节点期望的 legal name(查看 节点的命名 了解更多信息)。
- 如果需要的话,添加 RPC 用户
Copy the required Java keystores to the node. See Network certificates 将要求的 Java keystores 拷贝到节点。查看 Network certificates
Download the NSSM service manager 下载 NSSM service manager
Unzip
nssm-2.24\win64\nssm.exe
toC:\Corda
解压 nssm-2.24\win64\nssm.exe
到C:\Corda
Save the following as
C:\Corda\nssm.bat
: 将下边的代码存储为C:\Corda\nssm.bat
:
nssm install cordanode1 C:\ProgramData\Oracle\Java\javapath\java.exe
nssm set cordanode1 AppDirectory C:\Corda
nssm set cordanode1 AppStdout C:\Corda\service.log
nssm set cordanode1 AppStderr C:\Corda\service.log
nssm set cordanode1 Description Corda Node - Bank of Breakfast Tea
nssm set cordanode1 Start SERVICE_AUTO_START
sc start cordanode1
Modify the batch file:
- If you are installing multiple nodes, use a different service name (
cordanode1
) for each node - Set an informative description
- If you are installing multiple nodes, use a different service name (
修改这个 batch 文件:
- 如果你安装了多个节点,对每个节点要使用不同的 service name(
cordanode1
) - 设置一个有意义的描述
- 如果你安装了多个节点,对每个节点要使用不同的 service name(
Provision the required certificates to your node. Contact the network permissioning service or see Network certificates
为你的节点生成证书。联系网络权限服务或者查看 Network certificates
Run the batch file by clicking on it or from a command prompt
双击或者从命令行运行这个 batch file
- Run
services.msc
and verify that a service calledcordanode1
is present and running
运行services.msc
并确认一个名为cordanode1
的 service 显示并运行着
- Run
netstat -ano
and check for the ports you configured innode.conf
运行
netstat -ano
并确认你在node.conf
中设置的端口是否在运行
- You may need to open the ports on the Windows firewall
- 你可能需要在防火墙中打开这个端口
测试你的安装¶
You can verify Corda is running by connecting to your RPC port from another host, e.g.:
你可以通过另外的 host 来链接到你的 RPC 端口来确认 Corda 是否在运行:
telnet your-hostname.example.com 10002
If you receive the message “Escape character is ^]”, Corda is running and accessible. Press Ctrl-] and Ctrl-D to exit telnet.
如果你收到的消息是 “Escape character is ^]”,Corda 已经在运行并且可以访问了。按 Ctrl-] 和 Ctrl-D 退出 telnet。
节点数据库¶
配置节点数据库¶
H2¶
By default, nodes store their data in an H2 database. See 访问 H2 数据库.
默认的,节点会将他们的数据存储在 H2 数据库中。查看 访问 H2 数据库。
Nodes can also be configured to use PostgreSQL and SQL Server. However, these are experimental community contributions. The Corda continuous integration pipeline does not run unit tests or integration tests of these databases.
节点也可以被配置用来使用 PostgreSQL 和 SQL Server。然而,这些还都是由社区贡献出来的处于试验的阶段。Corda 的持续集成 pipeline 还没有针对这些数据库运行单元测试和集成测试。
PostgreSQL¶
Nodes can also be configured to use PostgreSQL 9.6, using PostgreSQL JDBC Driver 42.1.4. Here is an example node configuration for PostgreSQL:
节点也可以配置来使用 PostgreSQL 9.6,使用 PostgreSQL JDBC Driver 42.1.4。下边是配置使用 PostgreSQL 的例子:
dataSourceProperties = {
dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
dataSource.url = "jdbc:postgresql://[HOST]:[PORT]/[DATABASE]"
dataSource.user = [USER]
dataSource.password = [PASSWORD]
}
database = {
transactionIsolationLevel = READ_COMMITTED
}
Note that:
- Database schema name can be set in JDBC URL string e.g. currentSchema=my_schema
- Database schema name must either match the
dataSource.user
value to end up on the standard schema search path according to the PostgreSQL documentation, or the schema search path must be set explicitly for the user. - If your PostgreSQL database is hosting multiple schema instances (using the JDBC URL currentSchema=my_schema) for different Corda nodes, you will need to create a hibernate_sequence sequence object manually for each subsequent schema added after the first instance. Corda doesn't provision Hibernate with a schema namespace setting, so the sequence object may not be created. Run the following DDL statement, replacing my_schema with your schema namespace:
需要注意:
数据库 schema 名字可以在 JDBC URL 字符中被设置为 currentSchema=my_schema
数据库 schema 名字必须要么与 dataSource.user 的值一致,从而按照 PostgreSQL 文档落在标准的 schema 检索路径上,要么必须为该用户显式地设置 schema 检索路径。
如果你的 PostgreSQL 数据库为不同的 Corda 节点存储了多个 schema 实例(使用 JDBC URL currentSchema=my_schema),那么你需要为第一个实例之后添加的每一个 schema 手动地创建一个 hibernate_sequence 序列对象。Corda 并没有为 Hibernate 提供 schema 命名空间的设置,所以这个序列对象可能不会被创建。运行下边的 DDL 语句并且将 my_schema 替换成你的 schema 命名空间:
CREATE SEQUENCE my_schema.hibernate_sequence INCREMENT BY 1 MINVALUE 1 MAXVALUE 9223372036854775807 START 8 CACHE 1 NO CYCLE;
SQLServer¶
Nodes also have untested support for Microsoft SQL Server 2017, using Microsoft JDBC Driver 6.2 for SQL Server. Here is an example node configuration for SQLServer:
节点也可以支持未测试过的 Microsoft SQL Server 2017,使用 Microsoft JDBC Driver 6.2 for SQL Server,下边是一个对于 SQL Server 的节点配置:
dataSourceProperties = {
dataSourceClassName = "com.microsoft.sqlserver.jdbc.SQLServerDataSource"
dataSource.url = "jdbc:sqlserver://[HOST]:[PORT];databaseName=[DATABASE_NAME]"
dataSource.user = [USER]
dataSource.password = [PASSWORD]
}
database = {
transactionIsolationLevel = READ_COMMITTED
}
jarDirs = ["[FULL_PATH]/sqljdbc_6.2/enu/"]
Note that:
- Ensure the directory referenced by jarDirs contains only one JDBC driver JAR file; by default, sqljdbc_6.2/enu/ contains two JDBC JAR files for different Java versions.
- 确认 jarDirs 引用的路径里仅仅包含一个 JDBC driver JAR 文件;默认的 sqljdbc_6.2/enu/ 包含了两个针对不同的 Java 版本的 JDBC JAR 文件。
节点数据库表¶
By default, the node database has the following tables:
默认的,节点的数据库包含以下表:
Table name | Columns |
---|---|
DATABASECHANGELOG | ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED, EXECTYPE, MD5SUM, DESCRIPTION, COMMENTS, TAG, LIQUIBASE, CONTEXTS, LABELS, DEPLOYMENT_ID |
DATABASECHANGELOGLOCK | ID, LOCKED, LOCKGRANTED, LOCKEDBY |
NODE_ATTACHMENTS | ATT_ID, CONTENT, FILENAME, INSERTION_DATE, UPLOADER |
NODE_ATTACHMENTS_CONTRACTS | ATT_ID, CONTRACT_CLASS_NAME |
NODE_ATTACHMENTS_SIGNERS | ATT_ID, SIGNER |
NODE_CHECKPOINTS | CHECKPOINT_ID, CHECKPOINT_VALUE |
NODE_CONTRACT_UPGRADES | STATE_REF, CONTRACT_CLASS_NAME |
NODE_IDENTITIES | PK_HASH, IDENTITY_VALUE |
NODE_INFOS | NODE_INFO_ID, NODE_INFO_HASH, PLATFORM_VERSION, SERIAL |
NODE_INFO_HOSTS | HOST_NAME, PORT, NODE_INFO_ID, HOSTS_ID |
NODE_INFO_PARTY_CERT | PARTY_NAME, ISMAIN, OWNING_KEY_HASH, PARTY_CERT_BINARY |
NODE_LINK_NODEINFO_PARTY | NODE_INFO_ID, PARTY_NAME |
NODE_MESSAGE_IDS | MESSAGE_ID, INSERTION_TIME, SENDER, SEQUENCE_NUMBER |
NODE_NAMED_IDENTITIES | NAME, PK_HASH |
NODE_NETWORK_PARAMETERS | HASH, EPOCH, PARAMETERS_BYTES, SIGNATURE_BYTES, CERT, PARENT_CERT_PATH |
NODE_OUR_KEY_PAIRS | PUBLIC_KEY_HASH, PRIVATE_KEY, PUBLIC_KEY |
NODE_PROPERTIES | PROPERTY_KEY, PROPERTY_VALUE |
NODE_SCHEDULED_STATES | OUTPUT_INDEX, TRANSACTION_ID, SCHEDULED_AT |
NODE_TRANSACTIONS | TX_ID, TRANSACTION_VALUE, STATE_MACHINE_RUN_ID |
PK_HASH_TO_EXT_ID_MAP | ID, EXTERNAL_ID, PUBLIC_KEY_HASH |
STATE_PARTY | OUTPUT_INDEX, TRANSACTION_ID, ID, PUBLIC_KEY_HASH, X500_NAME |
VAULT_FUNGIBLE_STATES | OUTPUT_INDEX, TRANSACTION_ID, ISSUER_NAME, ISSUER_REF, OWNER_NAME, QUANTITY |
VAULT_FUNGIBLE_STATES_PARTS | OUTPUT_INDEX, TRANSACTION_ID, PARTICIPANTS |
VAULT_LINEAR_STATES | OUTPUT_INDEX, TRANSACTION_ID, EXTERNAL_ID, UUID |
VAULT_LINEAR_STATES_PARTS | OUTPUT_INDEX, TRANSACTION_ID, PARTICIPANTS |
VAULT_STATES | OUTPUT_INDEX, TRANSACTION_ID, CONSUMED_TIMESTAMP, CONTRACT_STATE_CLASS_NAME, LOCK_ID, LOCK_TIMESTAMP, NOTARY_NAME, RECORDED_TIMESTAMP, STATE_STATUS, RELEVANCY_STATUS, CONSTRAINT_TYPE, CONSTRAINT_DATA |
VAULT_TRANSACTION_NOTES | SEQ_NO, NOTE, TRANSACTION_ID |
V_PKEY_HASH_EX_ID_MAP | ID, PUBLIC_KEY_HASH, TRANSACTION_ID, OUTPUT_INDEX, EXTERNAL_ID |
数据库连接池¶
Corda uses Hikari Pool for creating the connection pool. To configure the connection pool any custom properties can be set in the dataSourceProperties section.
Corda 使用 Hikari Pool 来创建连接池。要配置连接池,可以在 dataSourceProperties 部分配置任何的自定义属性。
For example:
dataSourceProperties = {
dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
...
maximumPoolSize = 10
connectionTimeout = 50000
}
访问 H2 数据库¶
目录
配置用户名和密码¶
The database (a file called persistence.mv.db
) is created when the node first starts up. By default, it has an
administrator user sa
and a blank password. The node requires a user with administrator permissions in order to
create tables upon the first startup or after deploying new CorDapps with their own tables. The database password is
required only when the H2 database is exposed on a non-localhost address (which is disabled by default).
当节点第一次启动的时候,数据库(一个名字为 persistence.mv.db
的文件)会被创建。默认的它会有一个 sa
用户和一个空密码。节点需要这个用户有管理员权限,这样才能够在第一次启动或者是在部署带有他们自己的 tables 的新的 CorDapps 之后创建这些表。数据库的密码只有在 H2 数据库暴露在非本地的地址的时候才是必须要有值的(默认的是被 disable 的)。
This username and password can be changed in node configuration:
用户名和密码可以在节点配置中进行改动:
dataSourceProperties = {
    dataSource.user = [USER]
    dataSource.password = [PASSWORD]
}
Note that changing the user/password for the existing node in node.conf
will not update them in the H2 database.
You need to log into the database first to create a new user or change a user’s password.
注意,对于已经存在的节点,在 node.conf
中改变用户名和密码是不会在 H2 数据库中改变他们的。你需要先登录到数据库,创建一个新的用户,或者改变用户的密码。
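For example, once logged in as the sa administrator you could run H2 SQL like the following (a sketch; the user name and passwords are placeholders):
CREATE USER myuser PASSWORD 'mypassword' ADMIN;
ALTER USER sa SET PASSWORD 'newpassword';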
在一个运行的节点上通过一个 socket 连接¶
配置端口¶
Nodes backed by an H2 database will not expose this database by default. To configure the node to expose its internal
database over a socket which can be browsed using any tool that can use JDBC drivers, you must specify the full network
address (interface and port) using the h2Settings
syntax in the node configuration.
使用 H2 数据库的节点默认是不会暴露这个数据库的。为了配置节点通过一个 socket 暴露它的内部的数据库,以便用任何能够使用 JDBC Driver 的工具浏览,你必须要在节点的配置中使用 h2Settings
语法指定完整的网络地址(接口和端口)
The configuration below will restrict the H2 service to run on localhost
:
下边的配置将会限制 H2 服务运行在 localhost
上:
h2Settings {
address: "localhost:12345"
}
If you want H2 to auto-select a port (mimicking the old h2Port
behaviour), you can use:
如果你希望 H2 自动选择一个端口(模仿旧的 h2Port
行为),你可以使用:
h2Settings {
address: "localhost:0"
}
If remote access is required, the address can be changed to 0.0.0.0
to listen on all interfaces. A password must be
set for the database user before doing so.
如果需要远程访问,地址可以被改成 0.0.0.0
来监听所有的接口。在做这个之前,一个密码必须要为数据库用户设置好。
h2Settings {
address: "0.0.0.0:12345"
}
dataSourceProperties {
dataSource.password : "strongpassword"
}
注解
The previous h2Port
syntax is now deprecated. h2Port
will continue to work but the database will only
be accessible on localhost.
注解
以前的 h2Port
语法已经废弃了。h2Port
还会继续工作,但是数据库仅仅可以从 localhost 访问了。
连接到数据库¶
The JDBC URL is printed during node startup to the log and will typically look like this:
JDBC URL 会在节点启动的时候被打印到 log,并且通常会像下边这样:
jdbc:h2:tcp://localhost:31339/node
Any database browsing tool that supports JDBC can be used.
任何支持 JDBC 的数据库浏览工具都能够用来浏览数据库。
通过 H2 Console 连接¶
- Download the latest stable h2 platform-independent zip, unzip the zip, and navigate in a terminal window to the unzipped folder 下载 最新稳定版本 h2 platform-independent zip,解压 zip,在一个 terminal 窗口浏览至解压的文件夹
- Change directories to the bin folder:
cd h2/bin
将路径改变到 bin 文件夹:cd h2/bin
- Run the following command to open the h2 web console in a web browser tab:
运行下边的命令在一个 web 浏览器的 tab 里打开 h2 web console:
- Unix:
sh h2.sh
- Windows:
h2.bat
- Unix:
- Paste the node’s JDBC URL into the JDBC URL field and click
Connect
, using the default username (sa
) and no password (unless configured otherwise) 将节点的 JDBC URL 粘贴到 JDBC URL 字段并且点击Connect
,使用默认的用户名(sa
)并且不需要密码(除非你配置了密码)
You will be presented with a web interface that shows the contents of your node’s storage and vault, and provides an interface for you to query them using SQL.
你会看到一个 web 接口,显示了你的节点的存储和 vault 的内容,并且提供给你一个接口来使用 SQL 查询他们。
直接连接到节点的 persistence.mv.db
文件¶
You can also use the H2 Console to connect directly to the node’s persistence.mv.db
file. Ensure the node is off
before doing so, as access to the database file requires exclusive access. If the node is still running, the H2 Console
will return the following error:
你也可以使用 H2 Console 直接连到节点的 persistence.mv.db
文件。确保在做这个之前节点是关闭的,因为访问数据库需要一个独占的访问。如果节点还是在运行的话,H2 console 会返回下边的错误:
Database may be already in use: null. Possible solutions: close all other connection(s); use the server mode [90020-196]
.
Use a JDBC URL that points at the path of the database file, omitting the .mv.db extension, for example:
jdbc:h2:~/path/to/file/persistence
Node shell¶
目录
The Corda shell is an embedded or standalone command line that allows an administrator to control and monitor a node. It is based on the CRaSH shell and supports many of the same features. These features include:
- Invoking any of the node’s RPC methods
- Viewing a dashboard of threads, heap usage, VM properties
- Uploading and downloading attachments
- Issuing SQL queries to the underlying database
- Viewing JMX metrics and monitoring exports
- UNIX style pipes for both text and objects, an
egrep
command and a command for working with columnar data - Shutting the node down.
Permissions¶
When accessing the shell (embedded, standalone, via SSH) RPC permissions are required. This is because the shell actually communicates with the node using RPC calls.
- Watching flows (
flow watch
) requiresInvokeRpc.stateMachinesFeed
. - Starting flows requires
InvokeRpc.startTrackedFlowDynamic
,InvokeRpc.registeredFlows
andInvokeRpc.wellKnownPartyFromX500Name
, as well as a permission for the flow being started. - Killing flows (
flow kill
) requiresInvokeRpc.killFlow
. This currently allows the user to kill any flow, so please be careful when granting it!
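For example, a node.conf entry granting a user enough permissions to watch flows and start one specific flow from the shell might look like the following (a sketch; the user name, password and flow class are placeholders):
rpcUsers=[
    {
        username=shellUser
        password=shellPass
        permissions=[
            "InvokeRpc.stateMachinesFeed",
            "InvokeRpc.startTrackedFlowDynamic",
            "InvokeRpc.registeredFlows",
            "InvokeRpc.wellKnownPartyFromX500Name",
            "StartFlow.net.corda.flows.ExampleFlow1"
        ]
    }
]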
The shell via the local terminal¶
注解
Local terminal shell works only in development mode!
The shell will display in the node’s terminal window. It connects to the node as ‘shell’ user with password ‘shell’
(which is only available in dev mode).
It may be disabled by passing the --no-local-shell
flag when running the node.
The shell via SSH¶
The shell is also accessible via SSH.
Enabling SSH access¶
By default, the SSH server is disabled. To enable it, a port must be configured in the node’s node.conf
file:
sshd {
port = 2222
}
Authentication¶
Users log in to shell via SSH using the same credentials as for RPC. No RPC permissions are required to allow the connection and log in.
The host key is loaded from the <node root directory>/sshkey/hostkey.pem
file. If this file does not exist, it is
generated automatically. In development mode, the seed may be specified to give the same results on the same computer
in order to avoid host-checking errors.
Connecting to the shell¶
Linux and MacOS¶
Run the following command from the terminal:
ssh -p [portNumber] [host] -l [user]
Where:
[portNumber]
is the port number specified in thenode.conf
file[host]
is the node’s host (e.g.localhost
if running the node locally)[user]
is the RPC username
The RPC password will be requested after a connection is established.
注解
In development mode, restarting a node frequently may cause the host key to be regenerated. SSH usually saves
trusted hosts and will refuse to connect in case of a change. This check can be disabled using the
-o StrictHostKeyChecking=no
flag. This option should never be used in production environment!
The standalone shell¶
The standalone shell is a standalone application interacting with a Corda node via RPC calls. RPC node permissions are necessary for authentication and authorisation. Certain operations, such as starting flows, require access to CorDapp JARs.
Starting the standalone shell¶
Run the following command from the terminal:
corda-shell [-hvV] [--logging-level=<loggingLevel>] [--password=<password>]
[--sshd-hostkey-directory=<sshdHostKeyDirectory>]
[--sshd-port=<sshdPort>] [--truststore-file=<trustStoreFile>]
[--truststore-password=<trustStorePassword>]
[--truststore-type=<trustStoreType>] [--user=<user>] [-a=<host>]
[-c=<cordappDirectory>] [-f=<configFile>] [-o=<commandsDirectory>]
[-p=<port>] [COMMAND]
Where:
--config-file=<configFile>
, -f
The path to the shell configuration file, used instead of providing the rest of the command line options.--cordapp-directory=<cordappDirectory>
,-c
The path to the directory containing CorDapp jars, CorDapps are required when starting flows.--commands-directory=<commandsDirectory>
,-o
The path to the directory containing additional CRaSH shell commands.--host
,-a
: The host address of the Corda node.--port
,-p
: The RPC port of the Corda node.--user=<user>
: The RPC user name.--password=<password>
The RPC user password. If not provided it will be prompted for on startup.--sshd-port=<sshdPort>
Enables SSH server for shell.--sshd-hostkey-directory=<sshdHostKeyDirectory>
: The directory containing the hostkey.pem file for the SSH server.--truststore-password=<trustStorePassword>
: The password to unlock the TrustStore file.--truststore-file=<trustStoreFile>
: The path to the TrustStore file.--truststore-type=<trustStoreType>
: The type of the TrustStore (e.g. JKS).--verbose
,--log-to-console
,-v
: If set, prints logging to the console as well as to a file.--logging-level=<loggingLevel>
: Enable logging at this level and higher. Possible values: ERROR, WARN, INFO, DEBUG, TRACE. Default: INFO.--help
,-h
: Show this help message and exit.--version
,-V
: Print version information and exit.
Additionally, the install-shell-extensions
subcommand can be used to install the corda-shell
alias and auto completion for bash and zsh. See Shell extensions for CLI Applications for more info.
The format of config-file
:
node {
addresses {
rpc {
host : "localhost"
port : 10006
}
}
}
shell {
workDir : /path/to/dir
}
extensions {
cordapps {
path : /path/to/cordapps/dir
}
sshd {
enabled : "false"
port : 2223
}
}
ssl {
keystore {
path: "/path/to/keystore"
type: "JKS"
password: password
}
trustore {
path: "/path/to/trusttore"
type: "JKS"
password: password
}
}
user : demo
password : demo
Standalone Shell via SSH¶
The standalone shell can embed an SSH server which redirects interactions via RPC calls to the Corda node.
To run the SSH server, use the --sshd-port
option when starting the standalone shell, or the extensions.sshd
entry in the configuration file.
To connect over SSH, refer to Connecting to the shell.
Certain operations (like starting flows) will require the shell's --cordapp-directory
to be configured correctly (see Starting the standalone shell).
Interacting with the node via the shell¶
The shell interacts with the node by issuing RPCs (remote procedure calls). You make an RPC from the shell by typing
run
followed by the name of the desired RPC method. For example, you’d see a list of the registered flows on your
node by running:
run registeredFlows
Some RPCs return a stream of events that will be shown on screen until you press Ctrl-C.
You can find a list of the available RPC methods here.
Shutting down the node¶
You can shut the node down via shell:
gracefulShutdown
will put node into draining mode, and shut down when there are no flows runningshutdown
will shut the node down immediately
Output Formats¶
You can choose the format in which the output of the commands will be shown.
To see what is the format that’s currently used, you can type output-format get
.
To update the format, you can type output-format set json
.
The currently supported formats are json
, yaml
. The default format is yaml
.
Flow commands¶
The shell also has special commands for working with flows:
flow list
lists the flows available on the nodeflow watch
shows all the flows currently running on the node with result (or error) informationflow start
starts a flow. Theflow start
command takes the name of a flow class, or any unambiguous substring thereof, as well as the data to be passed to the flow constructor. If there are several matches for a given substring, the possible matches will be printed out. If a flow has multiple constructors then the names and types of the arguments will be used to try and automatically determine which one to use. If the match against available constructors is unclear, the reasons each available constructor failed to match will be printed out. In the case of an ambiguous match, the first applicable constructor will be usedflow kill
kills a single flow, as identified by its UUID.
Parameter syntax¶
Parameters are passed to RPC or flow commands using a syntax called Yaml (yet another markup language), a simple JSON-like language. The key features of Yaml are:
Parameters are separated by commas
Each parameter is specified as a
key: value
pair- There MUST to be a space after the colon, otherwise you’ll get a syntax error
Strings do not need to be surrounded by quotes unless they contain commas, colons or embedded quotes
Class names must be fully-qualified (e.g.
java.lang.String
)Nested classes are referenced using
$
. For example, thenet.corda.finance.contracts.asset.Cash.State
class is referenced asnet.corda.finance.contracts.asset.Cash$State
(note the$
)
注解
If your CorDapp is written in Java, named arguments won’t work unless you compiled the node using the
-parameters
argument to javac. See 创建本地节点 for how to specify it via Gradle.
Creating an instance of a class¶
Class instances are created using curly-bracket syntax. For example, if we have a Campaign
class with the following
constructor:
data class Campaign(val name: String, val target: Int)
Then we could create an instance of this class to pass as a parameter as follows:
newCampaign: { name: Roger, target: 1000 }
Where newCampaign
is a parameter of type Campaign
.
Mappings from strings to types¶
In addition to the types already supported by Jackson, several parameter types can automatically be mapped from strings. We cover the most common types here.
A parameter of type Amount<Currency>
can be written as either:
- A dollar ($), pound (£) or euro (€) symbol followed by the amount as a decimal
- The amount as a decimal followed by the ISO currency code (e.g. “100.12 CHF”)
A parameter of type SecureHash
can be written as a hexadecimal string: F69A7626ACC27042FEEAE187E6BFF4CE666E6F318DC2B32BE9FAF87DF687930C
A parameter of type OpaqueBytes
can be provided as a UTF-8 string.
A parameter of type PublicKey
can be written as a Base58 string of its encoded format: GfHq2tTVk9z4eXgyQXzegw6wNsZfHcDhfw8oTt6fCHySFGp3g7XHPAyc2o6D
.
net.corda.core.utilities.EncodingUtils.toBase58String
will convert a PublicKey
to this string format.
A parameter of type Party
can be written in several ways:
- By using the full name:
"O=Monogram Bank,L=Sao Paulo,C=GB"
- By specifying the organisation name only:
"Monogram Bank"
- By specifying any other non-ambiguous part of the name:
"Sao Paulo"
(if only one network node is located in Sao Paulo) - By specifying the public key (see above)
A parameter of type NodeInfo
can be written in terms of one of its identities (see Party
above)
A parameter of type AnonymousParty
can be written in terms of its PublicKey
(see above)
A parameter of type NetworkHostAndPort
can be written as a “host:port” string: "localhost:1010"
A parameter of Instant
and Date
can be written as an ISO-8601 string: "2017-12-22T00:00:00Z"
Examples¶
We would start the CashIssueFlow
flow as follows:
flow start CashIssueFlow amount: $1000, issuerBankPartyRef: 1234, notary: "O=Controller, L=London, C=GB"
This breaks down as follows:
flow start
is a shell command for starting a flowCashIssueFlow
is the flow we want to start- Each
name: value
pair after that is a flow constructor argument
This command invokes the following CashIssueFlow
constructor:
class CashIssueFlow(val amount: Amount<Currency>,
val issuerBankPartyRef: OpaqueBytes,
val recipient: Party,
val notary: Party) : AbstractCashFlow(progressTracker)
We would query the vault for IOUState
states as follows:
run vaultQuery contractStateType: com.template.IOUState
This breaks down as follows:
run
is a shell command for making an RPC callvaultQuery
is the RPC call we want to makecontractStateType: com.template.IOUState
is the fully-qualified name of the state type we are querying for
Attachments¶
The shell can be used to upload and download attachments from the node. To learn more, see the tutorial “Using attachments”.
Getting help¶
You can type help
in the shell to list the available commands, and man
to get interactive help on many
commands. You can also pass the --help
or -h
flags to a command to get info about what switches it supports.
Commands may have subcommands, in the same style as git
. In that case, running the command by itself will
list the supported subcommands.
Extending the shell¶
The shell can be extended using commands written in either Java or Groovy (a Java-compatible scripting language). These commands have full access to the node’s internal APIs and thus can be used to achieve almost anything.
A full tutorial on how to write such commands is out of scope for this documentation. To learn more, please refer to
the CRaSH documentation. New commands are placed in the shell-commands
subdirectory in the node directory. Edits
to existing commands will be used automatically, but currently commands added after the node has started won’t be
automatically detected. Commands must have names all in lower-case with either a .java
or .groovy
extension.
警告
Commands written in Groovy ignore Java security checks, so have unrestricted access to node and JVM internals regardless of any sandboxing that may be in place. Don’t allow untrusted users to edit files in the shell-commands directory!
Limitations¶
The shell will be enhanced over time. The currently known limitations include:
- Flows cannot be run unless they override the progress tracker
- If a command requires an argument of an abstract type, the command cannot be run because the concrete subclass to use cannot be specified using the YAML syntax
- There is no command completion for flows or RPCs
- Command history is not preserved across restarts
- The
jdbc
command requires you to explicitly log into the database first - Commands placed in the
shell-commands
directory are only noticed after the node is restarted - The
jul
command advertises access to logs, but it doesn’t work with the logging framework we’re using
与节点互动¶
目录
概要¶
To interact with your node, you need to write a client in a JVM-compatible language using the CordaRPCClient class. This class allows you to connect to your node via a message queue protocol and provides a simple RPC interface for interacting with the node. You make calls on a JVM object as normal, and the marshalling back-and-forth is handled for you.
为了跟你的节点互动,你需要使用一种 JVM 兼容的语言和 CordaRPCClient 类来编写一个客户端。这个类会通过一个消息队列协议连接到你的节点,并且提供一个简单的 RPC 接口来跟节点互动。你可以像平常那样调用一个 JVM 对象上的方法,来回的消息编组(marshalling)会自动帮你处理。
警告
The built-in Corda webserver is deprecated and unsuitable for production use. If you want to interact with your node via HTTP, you will need to stand up your own webserver that connects to your node using the CordaRPCClient class. You can find an example of how to do this using the popular Spring Boot server here.
警告
内置的 Corda webserver 已经废弃并且不再适合生产环境使用。如果你想通过 HTTP 跟你的节点互动的话,你需要创建你自己的 webserver,使用 CordaRPCClient 类来连接到你的节点。你可以在 这里 找到如何使用流行的 Spring Boot server 的例子。
通过 RPC 连接到一个节点¶
To use CordaRPCClient, you must add net.corda:corda-rpc:$corda_release_version
as a cordaCompile
dependency
in your client’s build.gradle
file.
为了使用 CordaRPCClient,你必须要将 net.corda:corda-rpc:$corda_release_version
作为一个 cordaCompile
的依赖添加到你的客户端的 build.gradle
文件中。
CordaRPCClient has a start
method that takes the node’s RPC address and returns a CordaRPCConnection.
CordaRPCConnection has a proxy
method that takes an RPC username and password and returns a CordaRPCOps
object that you can use to interact with the node.
CordaRPCClient 具有一个 start
方法,它接收节点的 RPC 地址并且返回一个 CordaRPCConnection。CordaRPCConnection 具有一个 proxy
方法,它接收一个 RPC 用户名和密码并且返回一个 CordaRPCOps 对象,你可以用它来跟你的节点互动。
Here is an example of using CordaRPCClient to connect to a node and log the current time on its internal clock:
下边是一个使用 CordaRPCClient 连接到一个节点并且把它内部时钟的当前时间记录到 log 的例子:
import net.corda.client.rpc.CordaRPCClient
import net.corda.core.utilities.NetworkHostAndPort.Companion.parse
import net.corda.core.utilities.loggerFor
import org.slf4j.Logger
class ClientRpcExample {
companion object {
val logger: Logger = loggerFor<ClientRpcExample>()
}
fun main(args: Array<String>) {
require(args.size == 3) { "Usage: TemplateClient <node address> <username> <password>" }
val nodeAddress = parse(args[0])
val username = args[1]
val password = args[2]
val client = CordaRPCClient(nodeAddress)
val connection = client.start(username, password)
val cordaRPCOperations = connection.proxy
logger.info(cordaRPCOperations.currentNodeTime().toString())
connection.notifyServerAndClose()
}
}
import net.corda.client.rpc.CordaRPCClient;
import net.corda.client.rpc.CordaRPCConnection;
import net.corda.core.messaging.CordaRPCOps;
import net.corda.core.utilities.NetworkHostAndPort;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
class ClientRpcExample {
private static final Logger logger = LoggerFactory.getLogger(ClientRpcExample.class);
public static void main(String[] args) {
if (args.length != 3) {
throw new IllegalArgumentException("Usage: TemplateClient <node address> <username> <password>");
}
final NetworkHostAndPort nodeAddress = NetworkHostAndPort.parse(args[0]);
String username = args[1];
String password = args[2];
final CordaRPCClient client = new CordaRPCClient(nodeAddress);
final CordaRPCConnection connection = client.start(username, password);
final CordaRPCOps cordaRPCOperations = connection.getProxy();
logger.info(cordaRPCOperations.currentNodeTime().toString());
connection.notifyServerAndClose();
}
}
警告
The returned CordaRPCConnection is somewhat expensive to create and consumes a small amount of
server side resources. When you’re done with it, call close
on it. Alternatively you may use the use
method on CordaRPCClient which cleans up automatically after the passed in lambda finishes. Don’t create
a new proxy for every call you make - reuse an existing one.
警告
返回的 CordaRPCConnection 的创建成本较高,并且会消耗少量服务器端的资源。当你用完它的时候,调用 close
。或者你可以使用 CordaRPCClient 的 use
方法,它会在传入 lambda 的方法结束后自动清理。不要为你的每次调用创建一个新的代理 - 重用一个已经存在的。
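A minimal sketch of the use pattern, reusing the names from the example above; the connection is closed automatically when the lambda completes:
CordaRPCClient(nodeAddress).use(username, password) { connection ->
    // Reuse this proxy for every call made while the connection is open.
    val proxy = connection.proxy
    println(proxy.currentNodeTime())
}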
For further information on using the RPC API, see Using the client RPC API.
关于使用 RPC API 的更多信息,查看 Using the client RPC API。
RPC 权限¶
For a node’s owner to interact with their node via RPC, they must define one or more RPC users. Each user is authenticated with a username and password, and is assigned a set of permissions that control which RPC operations they can perform. Permissions are not required to interact with the node via the shell, unless the shell is being accessed via SSH.
如果一个节点的 owner 想通过 RPC 来跟他的节点互动的话,他必须要定义一个或多个 RPC 用户。每个用户会通过一个用户名和密码来进行验证,还会被赋予一组用来控制他们能够执行哪些 RPC 操作的权限。使用 shell 来跟节点互动的时候是不需要 RPC 权限的,除非 shell 是通过 SSH 来访问的。
RPC users are created by adding them to the rpcUsers
list in the node’s node.conf
file:
RPC 用户信息会被添加到节点的 node.conf
文件中的 rpcUsers
列表中:
rpcUsers=[
{
username=exampleUser
password=examplePass
permissions=[]
},
...
]
By default, RPC users are not permissioned to perform any RPC operations.
默认的,RPC 用户不允许执行任何的 RPC 操作。
赋予 flow 权限¶
You provide an RPC user with the permission to start a specific flow using the syntax
StartFlow.<fully qualified flow name>
:
使用 StartFlow.<fully qualified flow name>
来给一个 RPC 用户提供开始某个指定的 flow 的权限:
rpcUsers=[
{
username=exampleUser
password=examplePass
permissions=[
"StartFlow.net.corda.flows.ExampleFlow1",
"StartFlow.net.corda.flows.ExampleFlow2"
]
},
...
]
You can also provide an RPC user with the permission to start any flow using the syntax
InvokeRpc.startFlow
:
你也可以使用 InvokeRpc.startFlow
来给 RPC 用户提供启动任何 flow 的权限:
rpcUsers=[
{
username=exampleUser
password=examplePass
permissions=[
"InvokeRpc.startFlow"
]
},
...
]
赋予其他的 RPC 权限¶
You provide an RPC user with the permission to perform a specific RPC operation using the syntax
InvokeRpc.<rpc method name>
:
可以使用 InvokeRpc.<rpc method name>
来给 RPC 用户分配执行一个指定的 RPC 操作的权限:
rpcUsers=[
{
username=exampleUser
password=examplePass
permissions=[
"InvokeRpc.nodeInfo",
"InvokeRpc.networkMapSnapshot"
]
},
...
]
RPC 安全管理¶
Setting rpcUsers
provides a simple way of granting RPC permissions to a fixed set of users, but has some
obvious shortcomings. To support use cases aiming for higher security and flexibility, Corda offers additional security
features such as:
- Fetching users credentials and permissions from an external data source (e.g.: a remote RDBMS), with optional in-memory caching. In particular, this allows credentials and permissions to be updated externally without requiring nodes to be restarted.
- Passwords stored in hash-encrypted form. This is regarded as a must-have when security is a concern. Corda currently supports a flexible password hash format conforming to the Modular Crypt Format provided by the Apache Shiro framework
设置 rpcUsers
提供了一个简单的方式来为一组固定的用户赋予 RPC 权限,但是有一些很明显的不足。为了支持对安全性和灵活性要求更高的用例,Corda 提供了额外的安全功能,比如:
- 从外部的数据源获取用户的验证信息和权限信息(比如从一个远程的 RDBMS),带有可选的在内存中的 caching。特别的,这种方式允许验证信息和权限信息可以在外部进行更新,而不需要重新启动节点
- 密码以哈希加密的形式存储。当需要考虑安全性的时候,这一点被认为是必须的。Corda 当前支持一种灵活的密码哈希格式,符合由 Apache Shiro framework 提供的 Modular Crypt Format。
These features are controlled by a set of options nested in the security
field of node.conf
.
The following example shows how to configure retrieval of users credentials and permissions from a remote database with
passwords in hash-encrypted format and enable in-memory caching of users data:
这些功能是由 node.conf
中 security
字段里的一系列选项来控制的。下边的例子演示了如何配置可从一个远程的数据库取回的用户验证信息和权限信息,密码是以哈希加密过的格式并且开启了用户数据在内存中 caching:
security = {
authService = {
dataSource = {
type = "DB"
passwordEncryption = "SHIRO_1_CRYPT"
connection = {
jdbcUrl = "<jdbc connection string>"
username = "<db username>"
password = "<db user password>"
driverClassName = "<JDBC driver>"
}
}
options = {
cache = {
expireAfterSecs = 120
maxEntries = 10000
}
}
}
}
It is also possible to have a static list of users embedded in the security
structure by specifying a dataSource
of INMEMORY
type:
也可以通过指定一个 INMEMORY
类型的 dataSource
来在 security
结构中指定一个用户的静态列表:
security = {
authService = {
dataSource = {
type = "INMEMORY"
users = [
{
username = "<username>"
password = "<password>"
permissions = ["<permission 1>", "<permission 2>", ...]
},
...
]
}
}
}
警告
A valid configuration cannot specify both the rpcUsers
and security
fields. Doing so will trigger
an exception at node startup.
警告
一个有效的配置不能够同时指定 rpcUsers
和 security
字段。这么做会在启动节点的时候造成异常
鉴权和授权数据¶
The dataSource
structure defines the data provider supplying credentials and permissions for users. There exist two
supported types of such data source, identified by the dataSource.type
field:
dataSource
结构定义了数据提供者来提供用户的验证信息和权限信息。这里有两种支持的数据源类型,通过 dataSource.type
字段来定义:
INMEMORY: A static list of user credentials and permissions specified by the
users
field.INMEMORY: 通过
users
字段指定的用户验证信息和权限信息的静态列表。DB: An external RDBMS accessed via the JDBC connection described by
connection
. Note that, unlike theINMEMORY
case, in a user database permissions are assigned to roles rather than individual users. The current implementation expects the database to store data according to the following schema:
- Table
users
containing columnsusername
andpassword
. Theusername
column must have unique values.- Table
user_roles
containing columnsusername
androle_name
associating a user to a set of roles.- Table
roles_permissions
containing columnsrole_name
andpermission
associating a role to a set of permission strings.DB: 可以通过
connection
描述的 JDBC 连接的一个外部的 RDBMS。注意:不像INMEMORY
case,在一个用户的数据库中,权限是被分配给 角色 的而不是个人。当前的实现期望数据库根据下边的 schema 来存储数据:
users
表包含username
和password
列。username
列必须要是唯一的值user_roles
表包含username
和role_name
列,这会把一个用户跟一系列的 角色 关联起来roles_permissions
表包含role_name
和permission
列,这会把一个角色跟一系列的权限关联起来注解
There is no prescription on the SQL type of each column (although our tests were conducted on
username
androle_name
declared of SQL typeVARCHAR
andpassword
ofTEXT
type). It is also possible to have extra columns in each table alongside the expected ones.注解
这里并没有强制每个列的 SQL 类型(尽管我们的测试对于
username
和role_name
使用的是VARCHAR
SQL 类型,password
是TEXT
类型)。在每个表中也可以按照需要增加额外的列。
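A minimal sketch of DDL matching the schema described above (the SQL column types are illustrative, since none are prescribed):
CREATE TABLE users (username VARCHAR(64) PRIMARY KEY, password TEXT);
CREATE TABLE user_roles (username VARCHAR(64), role_name VARCHAR(64));
CREATE TABLE roles_permissions (role_name VARCHAR(64), permission TEXT);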
密码的加密¶
Storing passwords in plain text is discouraged in applications where security is critical. Passwords are assumed
to be in plain format by default, unless a different format is specified by the passwordEncryption
field, like:
当安全性是很重要的时候,将密码以明文的形式存储是不被鼓励的。密码默认是明文的格式,除非对 passwordEncryption
字段指定了不同的格式,比如:
passwordEncryption = SHIRO_1_CRYPT
SHIRO_1_CRYPT
identifies the Apache Shiro fully reversible
Modular Crypt Format,
it is currently the only non-plain password hash-encryption format supported. Hash-encrypted passwords in this
format can be produced by using the Apache Shiro Hasher command line tool.
SHIRO_1_CRYPT
表示 Apache Shiro fully reversible Modular Crypt Format,这是当前唯一支持的非明文密码的哈希加密格式。可以使用 Apache Shiro Hasher 命令行工具 来生成哈希加密密码。
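For example, the hasher tool can be run interactively to produce a hash in this format (a sketch; replace X.X.X with the Shiro version you downloaded, and -p prompts for the password to hash):
java -jar shiro-tools-hasher-X.X.X-cli.jar -p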
缓存用户账户数据¶
A cache layer on top of the external data source of users credentials and permissions can significantly improve
performances in some cases, with the disadvantage of causing a (controllable) delay in picking up updates to the underlying data.
Caching is disabled by default, it can be enabled by defining the options.cache
field in security.authService
,
for example:
在用户验证信息和权限信息的外部数据源之上的一个 cache 层在很多情况下会很大程度的改善效率,但是会带来一个可控的对底层的数据的抓取延迟。Caching 默认是被 disabled,可以通过定义 security.authService 中的 options.cache
字段来开启,比如:
options = {
cache = {
expireAfterSecs = 120
maxEntries = 10000
}
}
This will enable a non-persistent cache contained in the node’s memory with maximum number of entries set to maxEntries
where entries are expired and refreshed after expireAfterSecs
seconds.
这个会开启一个保存在节点内存中的非持久化的 cache,最大条目数设置为 maxEntries
,其中的条目会在 expireAfterSecs
秒后过期并被刷新。
Observables¶
The RPC system handles observables in a special way. When a method returns an observable, whether directly or as a sub-object of the response object graph, an observable is created on the client to match the one on the server. Objects emitted by the server-side observable are pushed onto a queue which is then drained by the client. The returned observable may even emit object graphs with even more observables in them, and it all works as you would expect.
RPC 系统使用一种特殊的方式来处理 observables。当一个方法返回 observable 的时候,或者直接返回,或者作为一个 response object graph 的子对象返回,客户端会创建一个跟服务器端匹配的 observable。服务器端 observable 发出的对象会被放进一个队列中,这个队列会被客户端来消费。返回的 observable 甚至可能会发出含有更多 observables 的 object graphs,这一切都会像你期望的那样工作。
This feature comes with a cost: the server must queue up objects emitted by the server-side observable until you
download them. Note that the server side observation buffer is bounded, once it fills up the client is considered
slow and will be disconnected. You are expected to subscribe to all the observables returned, otherwise client-side
memory starts filling up as observations come in. If you don’t want an observable then subscribe then unsubscribe
immediately to clear the client-side buffers and to stop the server from streaming. For Kotlin users there is a
convenience extension method called notUsed()
which can be called on an observable to automate this step.
这个特性是有成本的:server 必须要将服务器端 observable 发出的对象放进队列,直到你把他们下载下来。注意,服务器端的 observation buffer 是有上限的,一旦被填满,客户端就会被认为太慢并且会被断开连接。你应该 subscribe 所有返回的 observables,否则的话随着 observations 的进入,客户端的内存就会不断地被占用。如果你不想要一个 observable,那么先 subscribe 然后立即 unsubscribe,这样会清理掉客户端的 buffer,也会让 server 停止 streaming。对于 Kotlin 用户,这里有一个方便的称为 notUsed()
的扩展方法,可以在一个 observable 上调用来自动化这个步骤。
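A minimal sketch in Kotlin, assuming a connected proxy and the finance Cash CorDapp on the classpath: the snapshot is used, but the update stream is declined with notUsed() so the server stops streaming and the client-side buffers are freed.
import net.corda.client.rpc.notUsed
import net.corda.finance.contracts.asset.Cash

val feed = proxy.vaultTrack(Cash.State::class.java)
println(feed.snapshot.states.size) // use the snapshot...
feed.updates.notUsed()             // ...but unsubscribe from the update stream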
If your app quits then server side resources will be freed automatically.
如果你的 app 退出了的话,那么服务器端的资源会被自动释放。
警告
If you leak an observable on the client side and it gets garbage collected, you will get a warning
printed to the logs and the observable will be unsubscribed for you. But don’t rely on this, as garbage collection
is non-deterministic. If you set -Dnet.corda.client.rpc.trackRpcCallSites=true
on the JVM command line then
this warning comes with a stack trace showing where the RPC that returned the forgotten observable was called from.
This feature is off by default because tracking RPC call sites is moderately slow.
警告
如果你在客户端泄露了一个 observable,并且它被垃圾回收了,你会在 log 中看到一个警告,并且这个 observable 会被自动地 unsubscribe。但是不要依赖于这个,因为垃圾回收是不确定(non-deterministic)的。如果你在 JVM 命令行上设置了 -Dnet.corda.client.rpc.trackRpcCallSites=true
,那么这个警告会带有一个 stack trace,显示返回这个被遗忘的 observable 的 RPC 是从哪里被调用的。这个功能默认是关闭的,因为跟踪 RPC 调用点(call sites)会带来一定的性能损耗。
注解
Observables can only be used as return arguments of an RPC call. It is not currently possible to pass Observables as parameters to the RPC methods. In other words the streaming is always server to client and not the other way around.
注解
Observables 仅仅能够作为一个 RPC 调用返回的参数来被使用。现在还不能够将 observables 作为参数传递给 RPC 方法。换句话说,流总会是从 server 到 client 的,不能是其他的方式。
未来¶
A method can also return a CordaFuture
in its object graph and it will be treated in a similar manner to
observables. Calling the cancel
method on the future will unsubscribe it from any future value and release
any resources.
一个方法也能够在它的 object graph 中返回一个 CordaFuture
并且它会以和 observables 类似的方式被对待。在这个 future 上调用 cancel
方法会取消对任何 future 值的订阅并释放所有的资源。
版本¶
The client RPC protocol is versioned using the node’s platform version number (see 版本). When a proxy is created
the server is queried for its version, and you can specify your minimum requirement. Methods added in later versions
are tagged with the @RPCSinceVersion
annotation. If you try to use a method that the server isn’t advertising support
of, an UnsupportedOperationException
is thrown. If you want to know the version of the server, just use the
protocolVersion
property (i.e. getProtocolVersion
in Java).
客户端 RPC 协议是使用节点的 Platform Version(查看 版本)来定义版本的。当一个代理(proxy)被创建后,客户端会查询 server 的版本,你可以指定你的最小版本要求。在后期的版本中被添加的方法都会带有 @RPCSinceVersion
的注解。如果你使用了一个 server 不支持的方法,一个 UnsupportedOperationException
的异常会被抛出。如果你想知道 server 的版本,只需要使用 protocolVersion
属性(比如 Java 中的 getProtocolVersion
)。
The RPC client library defaults to requiring the platform version it was built with. That means if you use the client
library released as part of Corda N, then the node it connects to must be of version N or above. This is checked when
the client first connects. If you want to override this behaviour, you can alter the minimumServerProtocolVersion
field in the CordaRPCClientConfiguration
object passed to the client. Alternatively, just link your app against
an older version of the library.
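A sketch of overriding the minimum required server platform version when creating the client, assuming the Corda 4 CordaRPCClientConfiguration API:
val config = CordaRPCClientConfiguration.DEFAULT.copy(minimumServerProtocolVersion = 4)
val client = CordaRPCClient(nodeAddress, config)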
线程安全¶
A proxy is thread safe, blocking, and allows multiple RPCs to be in flight at once. Any observables that are returned and you subscribe to will have objects emitted in order on a background thread pool. Each Observable stream is tied to a single thread, however note that two separate Observables may invoke their respective callbacks on different threads.
一个代理(proxy)是线程安全的、阻塞的(blocking)并且允许多个 RPCs 同时存在。任何返回的和你订阅的 observables 将会有对象会在后台运行的线程池中被有序地 emitted。每个 observable stream 会被绑定到一个单独的线程,但是要注意到的是,两个独立的 observables 可能在不同的线程上调用他们对应的 callbacks。
异常处理¶
If something goes wrong with the RPC infrastructure itself, an RPCException
is thrown. If you call a method that
requires a higher version of the protocol than the server supports, UnsupportedOperationException
is thrown.
Otherwise the behaviour depends on the devMode
node configuration option.
如果 RPC 基础架构本身出了问题,一个 RPCException
会被抛出。如果你调用了一个要求比当前 server 支持的版本更高的一个方法, UnsupportedOperationException
异常会被抛出。否则的话,这个行为会依赖于 devMode
的节点配置选项。
In devMode
, if the server implementation throws an exception, that exception is serialised and rethrown on the client
side as if it was thrown from inside the called RPC method. These exceptions can be caught as normal.
When not in devMode
, the server will mask exceptions not meant for clients and return an InternalNodeException
instead.
This does not expose internal information to clients, strengthening privacy and security. CorDapps can have exceptions implement
ClientRelevantError
to allow them to reach RPC clients.
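A minimal sketch of a CorDapp exception that is allowed through to RPC clients outside devMode (the exception class itself is illustrative):
import net.corda.core.ClientRelevantError
import net.corda.core.CordaRuntimeException

// Propagated to RPC clients instead of being masked as InternalNodeException.
class InsufficientFundsError(message: String) : CordaRuntimeException(message), ClientRelevantError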
重连 RPC clients¶
In the current version of Corda, the RPC connection and all the observables that are created by a client will just throw exceptions and die when the node or TCP connection becomes unavailable.
It is the client’s responsibility to handle these errors and reconnect once the node is running again. Running RPC commands against a stopped node will just throw exceptions. Previously created Observables will not emit any events after the node restarts. The client must explicitly re-run the command and re-subscribe to receive more events.
RPCs which have a side effect, such as starting flows, may have executed on the node even if the return value is not received by the client. The only way to confirm is to perform a business-level query and retry accordingly. The sample runFlowWithLogicalRetry helps with this.
In case users require such a functionality to write a resilient RPC client we have a sample that showcases how this can be implemented and also a thorough test that demonstrates it works as expected.
The code that performs the reconnecting logic is: ReconnectingCordaRPCOps.kt.
注解
This sample code is not exposed as an official Corda API, and must be included directly in the client codebase and adjusted.
The usage is showcased in the: RpcReconnectTests.kt. In case resiliency is a requirement, then it is recommended that users will write a similar test.
How to initialize the ReconnectingCordaRPCOps:
val bankAReconnectingRpc = ReconnectingCordaRPCOps(bankAAddress, demoUser.username, demoUser.password)
How to track the vault :
val vaultFeed = bankAReconnectingRpc.vaultTrackByWithPagingSpec(
Cash.State::class.java,
QueryCriteria.VaultQueryCriteria(),
PageSpecification(1, 1))
val vaultObserverHandle = vaultFeed.updates.asReconnecting().subscribe { update: Vault.Update<Cash.State> ->
log.info("vault update produced ${update.produced.map { it.state.data.amount }} consumed ${update.consumed.map { it.ref }}")
vaultEvents.add(update)
}
How to start a flow with a logical retry function that checks for the side effects of the flow:
bankAReconnectingRpc.runFlowWithLogicalRetry(
runFlow = { rpc ->
log.info("Starting CashIssueAndPaymentFlow for $amount")
val flowHandle = rpc.startTrackedFlowDynamic(
CashIssueAndPaymentFlow::class.java,
baseAmount.plus(Amount.parseCurrency("$amount USD")),
issuerRef,
bankB.nodeInfo.legalIdentities.first(),
false,
notary
)
val flowId = flowHandle.id
log.info("Started flow $amount with flowId: $flowId")
flowProgressEvents.addEvent(flowId, null)
// No reconnecting possible.
flowHandle.progress.subscribe(
{ prog ->
flowProgressEvents.addEvent(flowId, prog)
log.info("Progress $flowId : $prog")
},
{ error ->
log.error("Error thrown in the flow progress observer", error)
})
flowHandle.id
},
hasFlowStarted = { rpc ->
// Query for a state that is the result of this flow.
val criteria = QueryCriteria.VaultCustomQueryCriteria(builder { CashSchemaV1.PersistentCashState::pennies.equal(amount.toLong() * 100) }, status = Vault.StateStatus.ALL)
val results = rpc.vaultQueryByCriteria(criteria, Cash.State::class.java)
log.info("$amount - Found states ${results.states}")
// The flow has completed if a state is found
results.states.isNotEmpty()
},
onFlowConfirmed = {
flowsCountdownLatch.countDown()
log.info("Flow started for $amount. Remaining flows: ${flowsCountdownLatch.count}")
}
)
Note that, as shown by the test, during reconnecting some events might be lost.
// Check that enough vault events were received.
// This check is fuzzy because events can go missing during node restarts.
// Ideally there should be nrOfFlowsToRun events receive but some might get lost for each restart.
assertTrue(vaultEvents!!.size + nrFailures * 2 >= nrOfFlowsToRun, "Not all vault events were received")
Wire 安全¶
If TLS communications to the RPC endpoint are required the node should be configured with rpcSettings.useSSL=true
see 节点的配置.
The node admin should then create a node specific RPC certificate and key, by running the node once with generate-rpc-ssl-settings
command specified (see Node command-line options).
The generated RPC TLS trust root certificate will be exported to a certificates/export/rpcssltruststore.jks
file which should be distributed to the authorised RPC clients.
The connecting CordaRPCClient
code must then use one of the constructors with a parameter of type ClientRpcSslOptions
(JavaDoc) and set this constructor
argument with the appropriate path for the rpcssltruststore.jks
file. The client connection will then use this to validate the RPC server handshake.
Note that RPC TLS does not use mutual authentication, and delegates fine grained user authentication and authorisation to the RPC security features detailed above.
Corda 节点的白名单类¶
CorDapps must whitelist any classes used over RPC with Corda’s serialization framework, unless they are whitelisted by
default in DefaultWhitelist
. The whitelisting is done either via the plugin architecture or by using the
@CordaSerializable
annotation. See Object serialization. An example is shown in Using the client RPC API.
CorDapps 对于通过 RPC 所使用的任何类,都要使用 Corda 的序列化(serialization) framework 将他们添加至白名单,除非这些类已经在 DefaultWhitelist
中默认地被添加到白名单中了。添加白名单的操作既可以通过 plugin architecture 或者使用 @CordaSerializable
注解来实现。查看 Object serialization。在 Using the client RPC API 中也显示了一个例子。
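For example, a custom type returned over RPC could be whitelisted with the annotation like this (a sketch; the class is illustrative):
import net.corda.core.serialization.CordaSerializable

@CordaSerializable
data class TradeSummary(val tradeRef: String, val quantity: Long)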
创建本地节点¶
目录
手动创建节点¶
A node can be created manually by creating a folder that contains the following items:
The Corda JAR
- Can be downloaded from https://r3.bintray.com/corda/net/corda/corda/ (under /4.1-RC01/corda-4.1-RC01.jar)
A node configuration file entitled
node.conf
, configured as per 节点的配置A folder entitled
cordapps
containing any CorDapp JARs you want the node to loadOptional: A webserver JAR entitled
corda-webserver.jar
that will connect to the node via RPC- The (deprecated) default webserver can be downloaded from http://r3.bintray.com/corda/net/corda/corda-webserver/ (under /4.1-RC01/corda-webserver-4.1-RC01.jar)
- A Spring Boot alternative can be found here: https://github.com/corda/spring-webserver
一个节点可以通过创建一个包含下边项目的文件夹来创建:
Corda JAR
- 可以从 https://r3.bintray.com/corda/net/corda/corda/ 下载 (在 /4.1-RC01/corda-4.1-RC01.jar 下)
一个节点的配置文件为
node.conf
, 像 节点的配置 所说的那样进行配置一个名为
cordapps
文件夹包含了你想要节点加载的任何的 CorDapp JARs可选的: 一个名为
corda-webserver.jar
的 webserver JAR,可以通过 RPC 连接到节点- (已废弃的) 默认的 webserver 可以从 http://r3.bintray.com/corda/net/corda/corda-webserver/ 下载(在 /4.1-RC01/corda-webserver-4.1-RC01.jar 下)
- 一个 Spring Boot 能够在这里找到: https://github.com/corda/spring-webserver
The remaining files and folders described in 节点文件夹结构 will be generated at runtime.
在 节点文件夹结构 中描述的剩余的文件和文件夹将会在运行时生成。
Cordform 任务¶
Corda provides a gradle plugin called Cordform
that allows you to automatically generate and configure a set of
nodes for testing and demos. Here is an example Cordform
task called deployNodes
that creates three nodes, defined
in the Kotlin CorDapp Template:
Corda 提供了一个叫做 Cordform
的 gradle plugin,它允许你自动地生成和配置一套节点的信息用于测试和 demos。下边是一个叫做 deployNodes
的 Cordform
任务,它在 Kotlin CorDapp Template 项目中创建了 3 个节点:
task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) {
directory "./build/nodes"
nodeDefaults {
cordapps = [
"net.corda:corda-finance-contracts:$corda_release_version",
"net.corda:corda-finance-workflows:$corda_release_version",
"net.corda:corda-confidential-identities:$corda_release_version"
]
}
node {
name "O=Notary,L=London,C=GB"
// The notary will offer a validating notary service.
notary = [validating : true]
p2pPort 10002
rpcSettings {
port 10003
adminPort 10023
}
// No webport property, so no webserver will be created.
h2Port 10004
// Starts an internal SSH server providing a management shell on the node.
sshdPort 2223
extraConfig = [
// Setting the JMX reporter type.
jmxReporterType: 'JOLOKIA',
// Setting the H2 address.
h2Settings: [ address: 'localhost:10030' ]
]
}
node {
name "O=PartyA,L=London,C=GB"
p2pPort 10005
rpcSettings {
port 10006
adminPort 10026
}
webPort 10007
h2Port 10008
// Grants user1 all RPC permissions.
rpcUsers = [[ user: "user1", "password": "test", "permissions": ["ALL"]]]
}
node {
name "O=PartyB,L=New York,C=US"
p2pPort 10009
rpcSettings {
port 10010
adminPort 10030
}
webPort 10011
h2Port 10012
// Grants user1 the ability to start the MyFlow flow.
rpcUsers = [[ user: "user1", "password": "test", "permissions": ["StartFlow.net.corda.flows.MyFlow"]]]
}
}
Running this task will create three nodes in the build/nodes
folder:
- A
Notary
node that:- Offers a validating notary service
- Will not have a webserver (since
webPort
is not defined) - Is running the
corda-finance
CorDapp
PartyA
andPartyB
nodes that:- Are not offering any services
- Will have a webserver (since
webPort
is defined) - Are running the
corda-finance
CorDapp - Have an RPC user,
user1
, that can be used to log into the node via RPC
运行这个任务会在 build/nodes
文件夹下创建 3 个节点:
- 一个
Notary
节点:- 提供一个 validating notary 服务
- 不会有 webserver(因为
webPort
没有定义) - 会运行 corda-finance CorDapp
PartyA
和PartyB
节点:- 不提供任何服务
- 会有一个 webserver(因为
webPort
被定义了) - 运行
corda-finance
CorDapp - 有一个 RPC 用户 -
user1
,可以通过 RPC 登陆到节点
Additionally, all three nodes will include any CorDapps defined in the project’s source folders, even though these
CorDapps are not listed in each node’s cordapps
entry. This means that running the deployNodes
task from the
template CorDapp, for example, would automatically build and add the template CorDapp to each node.
另外,所有的三个节点都会包含任何在项目的 source 文件夹中定义的任何 CorDapp,即使这些 CorDapps 没有被列在每个节点的 cordapps
entry。这就意味着在 template CorDapp 中运行这个 deployNodes
任务会自动 build 并将这个 template CorDapp 添加到每个节点中。
You can extend deployNodes
to generate additional nodes.
你可以扩展这个 deployNodes
来生成更多的节点。
警告
When adding nodes, make sure that there are no port clashes!
警告
当添加节点的时候,要确保没有端口冲突!
To extend node configuration beyond the properties defined in the deployNodes
task use the configFile
property with the path (relative or absolute) set to an additional configuration file.
This file should follow the standard 节点的配置 format, as per node.conf. The properties from this file will be appended to the generated node configuration. Note, if you add a property already created by the ‘deployNodes’ task, both properties will be present in the file.
The path to the file can also be added while running the Gradle task via the -PconfigFile
command line option. However, the same file will be applied to all nodes.
Following the previous example PartyB
node will have additional configuration options added from a file node-b.conf
:
task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) {
[...]
node {
name "O=PartyB,L=New York,C=US"
[...]
// Grants user1 the ability to start the MyFlow flow.
rpcUsers = [[ user: "user1", "password": "test", "permissions": ["StartFlow.net.corda.flows.MyFlow"]]]
configFile = "samples/trader-demo/src/main/resources/node-b.conf"
}
}
The Cordform parameter drivers of the node entry lists paths of the files to be copied to the ./drivers subdirectory of the node. To copy the same file to all nodes, ext.drivers can be defined at the top level and reused for each node via drivers=ext.drivers.
task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) {
ext.drivers = ['lib/my_common_jar.jar']
[...]
node {
name "O=PartyB,L=New York,C=US"
[...]
drivers = ext.drivers + ['lib/my_specific_jar.jar']
}
}
Signing Cordapp JARs¶
The default behaviour of Cordform is to deploy CorDapp JARs “as built”:
- prior to Corda 4 all CorDapp JARs were unsigned.
- as of Corda 4, CorDapp JARs created by the Gradle cordapp plugin are signed by a Corda development certificate by default.
The Cordform signing
entry can be used to override and customise the signing of CorDapp JARs.
Signing the CorDapp enables its contract classes to use signature constraints instead of other types of the constraints API: 合约约束.
The sign task may use an external keystore, or create a new one.
The signing
entry may contain the following parameters:
enabled
the control flag to enable signing process, by default is set tofalse
, set totrue
to enable signingall
if set totrue
(by default) all CorDapps inside cordapp subdirectory will be signed, otherwise iffalse
then only the generated Cordapp will be signedoptions
any relevant parameters of SignJar ANT task and GenKey ANT task, by default the JAR file is signed by the Corda development key, the external keystore can be specified, the minimal list of required options is shown below, for other options refer to SignJar task:keystore
the path to the keystore file, by default cordadevcakeys.jks keystore is shipped with the pluginalias
the alias to sign under, the default value is cordaintermediatecastorepass
the keystore password, the default value is cordacadevpasskeypass
the private key password if it’s different than the password for the keystore, the default value is cordacadevkeypassstoretype
the keystore type, the default value is JKSdname
the distinguished name for the entity, the option is used when generateKeystore true
only
keyalg
the algorithm to use when generating the key pair, the value defaults to RSA as Corda doesn't support DSA, the option is used when generateKeystore true
only
generateKeystore
the flag to generate a keystore, it is set tofalse
by default. If set totrue
then an ad hoc keystore is created and its key is used instead of the default Corda development key or any external key. The same options
to specify an external keystore are used to define the newly created keystore. Additionally dname
and keyalg
are required. Other options are described in GenKey task. If the existing keystore is already present the task will reuse it, however if the file is inside the build directory, then it will be deleted when Gradle clean task is run.
The example below shows the minimal set of options
needed to create a dummy keystore:
task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) {
signing {
enabled true
generateKeystore true
all false
options {
keystore "./build/nodes/jarSignKeystore.p12"
alias "cordapp-signer"
storepass "secret1!"
storetype "PKCS12"
dname "OU=Dummy Cordapp Distributor, O=Corda, L=London, C=GB"
keyalg "RSA"
}
}
//...
Contracts classes from signed CorDapp JARs will be checked by signature constraints by default.
You can force them to be checked by zone constraints by adding contract class names to includeWhitelist
entry,
the list will generate include_whitelist.txt file used internally by Network Bootstrapper tool.
Refer to API: 合约约束 to understand implication of different constraint types before adding includeWhitelist
to deployNodes
task.
The snippet below configures contracts classes from Finance CorDapp to be verified using zone constraints instead of signature constraints:
task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) {
includeWhitelist = [ "net.corda.finance.contracts.asset.Cash", "net.corda.finance.contracts.asset.CommercialPaper" ]
//...
Specifying a custom webserver¶
By default, any node listing a web port will use the default development webserver, which is not production-ready. You
can use your own webserver JAR instead by using the webserverJar
argument in a Cordform
node
configuration
block:
node {
name "O=PartyA,L=New York,C=US"
webPort 10005
webserverJar "lib/my_webserver.jar"
}
The webserver JAR will be copied into the node’s build
folder with the name corda-webserver.jar
.
警告
This is an experimental feature. There is currently no support for reading the webserver’s port from the
node’s node.conf
file.
The Dockerform task¶
The Dockerform
is a sister task of Cordform
that provides an extra file allowing you to easily spin up
nodes using docker-compose
. It supports the following configuration options for each node:
name
notary
cordapps
rpcUsers
useTestClock
There is no need to specify the nodes' ports, as every node has a separate container, so no port conflicts will occur.
Every node will expose port 10003
for RPC connections.
The nodes’ webservers will not be started. Instead, you should interact with each node via its shell over SSH
(see the node configuration options). You have to enable the shell by adding the
following line to each node’s node.conf
file:
sshd { port = 2222 }
Where 2222
is the port you want to open to SSH into the shell.
Below you can find the example task from the IRS Demo included in the samples directory of the main Corda GitHub repository:
def rpcUsersList = [
['username' : "user",
'password' : "password",
'permissions' : [
"StartFlow.net.corda.irs.flows.AutoOfferFlow\$Requester",
"StartFlow.net.corda.irs.flows.UpdateBusinessDayFlow\$Broadcast",
"StartFlow.net.corda.irs.api.NodeInterestRates\$UploadFixesFlow",
"InvokeRpc.vaultQueryBy",
"InvokeRpc.networkMapSnapshot",
"InvokeRpc.currentNodeTime",
"InvokeRpc.wellKnownPartyFromX500Name"
]]
]
// (...)
task deployNodes(type: net.corda.plugins.Dockerform, dependsOn: ['jar']) {
nodeDefaults {
cordapps = [
"net.corda:corda-finance-contracts:$corda_release_version",
"net.corda:corda-finance-workflows:$corda_release_version",
"net.corda:corda-confidential-identities:$corda_release_version"
]
}
node {
name "O=Notary Service,L=Zurich,C=CH"
notary = [validating : true]
rpcUsers = rpcUsersList
useTestClock true
}
node {
name "O=Bank A,L=London,C=GB"
rpcUsers = rpcUsersList
useTestClock true
}
node {
name "O=Bank B,L=New York,C=US"
rpcUsers = rpcUsersList
useTestClock true
}
node {
name "O=Regulator,L=Moscow,C=RU"
rpcUsers = rpcUsersList
useTestClock true
}
}
Running the Cordform/Dockerform tasks¶
To create the nodes defined in our deployNodes
task, run the following command in a terminal window from the root
of the project where the deployNodes
task is defined:
- Linux/macOS:
./gradlew deployNodes
- Windows:
gradlew.bat deployNodes
This will create the nodes in the build/nodes
folder. There will be a node folder generated for each node defined
in the deployNodes
task, plus a runnodes
shell script (or batch file on Windows) to run all the nodes at once
for testing and development purposes. If you make any changes to your CorDapp source or deployNodes
task, you will
need to re-run the task to see the changes take effect.
If the task is a Dockerform
task, running the task will also create an additional Dockerfile
in each node
directory, and a docker-compose.yml
file in the build/nodes
directory.
You can now run the nodes by following the instructions in Running a node.
Running nodes locally¶
Contents
Note
You should already have generated your node(s) with their CorDapps installed by following the instructions in Creating nodes locally.
There are several ways to run a Corda node locally for testing purposes.
Starting a Corda node using DemoBench¶
See the instructions in DemoBench.
Starting a Corda node from the command line¶
Run a node by opening a terminal window in the node’s folder and running:
java -jar corda.jar
By default, the node will look for a configuration file called node.conf
and a CorDapps folder called cordapps
in the current working directory. You can override the configuration file and workspace paths on the command line (e.g.
java -jar corda.jar --config-file=test.conf --base-directory=/opt/corda/nodes/test
).
Optionally run the node’s webserver as well by opening a terminal window in the node’s folder and running:
java -jar corda-webserver.jar
Warning
The node webserver is for testing purposes only and will be removed soon.
Setting JVM arguments¶
There are several ways of setting JVM arguments for the node process (particularly the garbage collector and the memory settings). They are listed here in order of increasing priority, i.e. if the same flag is set in a way later in this list, it will override anything set earlier.
Default arguments in capsule:
The capsuled Corda node has default flags set to -Xmx512m -XX:+UseG1GC. This gives the node a relatively low 512 MB of heap space and turns on the G1 garbage collector, ensuring low pause times for garbage collection.
Node configuration:
The node configuration file can specify custom default JVM arguments by adding a section like:
custom = {
    jvmArgs: [ '-Xmx1G', '-XX:+UseG1GC' ]
}
Note that this will completely replace any defaults set by capsule above, not just the flags that are set here, so if you use this to set e.g. the memory, you also need to set the garbage collector, or it will revert to whatever default your JVM is using.
Capsule specific system property:
You can use a special system property that Capsule understands to set JVM arguments only for the Corda process, not the launcher that actually starts it:
java -Dcapsule.jvm.args="-Xmx1G" -jar corda.jar
Setting a property like this will override any previous value for that property, but will not interfere with any other JVM arguments that are configured in any of the ways mentioned above. In this example, it would reset the maximum heap memory to 1 GB.
Command line flag:
You can set JVM args on the command line that apply to both the launcher process and the node process, as in the example above. This will override any value for the same flag set any other way, but will leave any other JVM arguments alone.
Starting all nodes at once on a local machine from the command line¶
Native¶
If you created your nodes using deployNodes
, a runnodes
shell script (or batch file on Windows) will have been
generated to allow you to quickly start up all nodes and their webservers. runnodes
should only be used for testing
purposes.
Start the nodes with runnodes
by running the following command from the root of the project:
- Linux/macOS:
build/nodes/runnodes
- Windows:
call build\nodes\runnodes.bat
Warning
On macOS, do not click/change focus until all the node terminal windows have opened, or some processes may fail to start.
If you receive an OutOfMemoryError
exception when interacting with the nodes, you need to increase the amount of
Java heap memory available to them, which you can do when running them individually. See
Starting a Corda node from the command line.
docker-compose¶
If you created your nodes using Dockerform, the docker-compose.yml file and a corresponding Dockerfile for each node have been created and configured appropriately. Navigate to the build/nodes directory and run the docker-compose up command. This will start the nodes inside a new, internal Docker network.
After the nodes have started up, you can use the docker ps command to see how the ports are mapped.
Warning
You need both Docker
and docker-compose
installed and enabled to use this method. Docker CE
(Community Edition) is enough. Please refer to Docker CE documentation
and Docker Compose documentation for installation instructions for all
major operating systems.
Starting all nodes at once on a remote machine from the command line¶
By default, Cordform
expects the nodes it generates to be run on the same machine where they were generated.
In order to run the nodes remotely, the nodes can be deployed locally and then copied to a remote server.
If after copying the nodes to the remote machine you encounter errors related to localhost
resolution, you will additionally need to follow the steps below.
To create the nodes locally and run them on a remote machine, perform the following steps:
1. Configure the Cordform task and deploy the nodes locally as described in Creating nodes locally.
2. Copy the generated directory structure to the remote machine, e.g. using secure copy.
3. Optionally, bootstrap the network on the remote machine. This step is needed when the remote machine doesn't accept localhost addresses, or when the generated nodes are configured to run on another host's IP address. If required, change the host addresses in the top-level configuration files [NODE NAME]_node.conf for the entries p2pAddress, rpcSettings.address and rpcSettings.adminAddress. Then run the network bootstrapper tool to regenerate the nodes' network map (see Network Bootstrapper for more explanation):
java -jar corda-tools-network-bootstrapper-Master.jar --dir <nodes-root-dir>
4. Run the nodes on the remote machine using the runnodes command.
The above steps create a test deployment, as the deployNodes Gradle task would do on a local machine.
Networks¶
What is a compatibility zone?¶
Every Corda node is part of a “zone” (also sometimes called a Corda network) that is permissioned. Production deployments require a secure certificate authority. We use the term “zone” to refer to a set of technically compatible nodes reachable over a TCP/IP network like the internet. The word “network” is used in Corda but can be ambiguous with the concept of a “business network”, which is usually more like a membership list or subset of nodes in a zone that have agreed to trade with each other.
How do I become part of a compatibility zone?¶
Bootstrapping a compatibility zone¶
You can easily bootstrap a compatibility zone for testing or pre-production use with either the Network Bootstrapper or the Corda Network Builder tools.
Joining an existing compatibility zone¶
After the testing and pre-production phases, users are encouraged to join an existing compatibility zone such as Corda Network (the main compatibility zone) or the Corda Testnet. See Joining an existing compatibility zone.
Setting up a dynamic compatibility zone¶
Some users may also be interested in setting up their own dynamic compatibility zone. For instructions and a discussion of whether this approach is suitable for you, see Setting up a dynamic compatibility zone.
Network certificates¶
Contents
Certificate hierarchy¶
A Corda network has three types of certificate authorities (CAs):
- The root network CA that defines the extent of a compatibility zone
- The doorman CA that is used instead of the root network CA for day-to-day key signing to reduce the risk of the root network CA’s private key being compromised. This is equivalent to an intermediate certificate in the web PKI
- Each node also serves as its own CA, issuing the child certificates that it uses to sign its identity keys and TLS certificates
Each certificate contains an X.509 extension that defines the certificate/key’s role in the system (see below for details). It also uses X.509 name constraints to ensure that the X.500 names that encode human meaningful identities are propagated to all the child certificates properly. The following constraints are imposed:
- Doorman certificates are issued by a network root. Network root certs do not contain a role extension
- Node certificates are signed by a doorman certificate (as defined by the extension)
- Legal identity/TLS certificates are issued by a certificate marked as node CA
- Confidential identity certificates are issued by a certificate marked as well known legal identity
- Party certificates are marked as either a well known identity or a confidential identity
The structure of certificates above the doorman/network map is intentionally left untouched, as they are not relevant to the identity service and therefore there is no advantage in enforcing a specific structure on those certificates. The certificate hierarchy consistency checks are required because nodes can issue their own certificates and can set their own role flags on certificates, and it’s important to verify that these are set consistently with the certificate hierarchy design. As a side-effect this also acts as a secondary depth restriction on issued certificates.
We can visualise the permissioning structure as follows:

Key pair and certificate formats¶
The required key pairs and certificates take the form of the following Java-style keystores (this may change in future to
support PKCS#12 keystores) in the node’s <workspace>/certificates/
folder:
- network-root-truststore.jks, the network/zone operator's root certificate, as provided by them, with a standard password. Can be deleted after initial registration
- truststore.jks, the network/zone operator's root certificate in a keystore with a locally configurable password, as protection against certain attacks
- nodekeystore.jks, which stores the node's identity key pairs and certificates
- sslkeystore.jks, which stores the node's TLS key pair and certificate
The key pairs and certificates must obey the following restrictions:
The certificates must follow the X.509v3 standard
The TLS certificates must follow the TLS v1.2 standard
The root network CA, doorman CA, and node CA keys, as well as the node TLS keys, must follow one of the following schemes:
- ECDSA using the NIST P-256 curve (secp256r1)
- ECDSA using the Koblitz k1 curve (secp256k1)
- RSA with 3072-bit key size or higher
The node CA certificates must have the basic constraints extension set to true
The TLS certificates must have the basic constraints extension set to false
Certificate role extension¶
Corda certificates have a custom X.509v3 extension that specifies the role the certificate relates to. This extension
has the OID 1.3.6.1.4.1.50530.1.1
and is non-critical, so implementations outside of Corda nodes can safely ignore it.
The extension contains a single ASN.1 integer identifying the identity type the certificate is for:
- Doorman
- Network map
- Service identity (currently only used as the shared identity in distributed notaries)
- Node certificate authority (from which the TLS and well-known identity certificates are issued)
- Transport layer security
- Well-known legal identity
- Confidential legal identity
In a typical installation, node administrators need not be aware of these. However, if node certificates are to be managed by external tools, such as those provided as part of an existing PKI solution deployed within an organisation, it is important to recognise these extensions and the constraints noted above.
Certificate path validation is extended so that a certificate must contain the extension if the extension was present in the certificate of the issuer.
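If node certificates are to be inspected by external PKI tooling, the role extension can be read with standard X.509 APIs. The following Kotlin sketch (the helper is illustrative, not a Corda API) uses Bouncy Castle's ASN.1 classes to decode the role integer:
import org.bouncycastle.asn1.ASN1Integer
import org.bouncycastle.asn1.ASN1OctetString
import org.bouncycastle.asn1.ASN1Primitive
import java.security.cert.X509Certificate

// Returns the raw ASN.1 integer identifying the certificate's role,
// or null if the certificate does not carry the Corda role extension.
fun certificateRole(cert: X509Certificate): Int? {
    val extensionValue = cert.getExtensionValue("1.3.6.1.4.1.50530.1.1") ?: return null
    // getExtensionValue() returns the DER-encoded OCTET STRING wrapping the extension body.
    val body = ASN1OctetString.getInstance(ASN1Primitive.fromByteArray(extensionValue)).octets
    return ASN1Integer.getInstance(ASN1Primitive.fromByteArray(body)).value.toInt()
}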
Network Map¶
Contents
The network map is a collection of signed NodeInfo
objects. Each NodeInfo is signed by the node it represents and
thus cannot be tampered with. It forms the set of reachable nodes in a compatibility zone. A node can receive these
objects from two sources:
- A network map server that speaks a simple HTTP based protocol.
- The
additional-node-infos
directory within the node’s directory.
The network map server also distributes the network parameters file, which defines values for various settings that all nodes need to agree on to remain in sync.
Note
In Corda 3 no implementation of the HTTP network map server is provided. This is because the details of how a compatibility zone manages its membership (the databases, ticketing workflows, HSM hardware etc) is expected to vary between operators, so we provide a simple REST based protocol for uploading/downloading NodeInfos and managing network parameters. A future version of Corda may provide a simple “stub” implementation for running test zones. In Corda 3 the right way to run a test network is through distribution of the relevant files via your own mechanisms. We provide a tool to automate the bulk of this task (see below).
The HTTP network map protocol¶
If the node is configured with the compatibilityZoneURL
config then it first uploads its own signed NodeInfo
to the server at that URL (and each time it changes on startup) and then proceeds to download the entire network map from
the same server. The network map consists of a list of NodeInfo
hashes. The node periodically polls for the network map
(based on the HTTP cache expiry header) and any new entries are downloaded and cached. Entries which no longer exist are deleted from the node’s cache.
The set of REST end-points for the network map service is as follows.
Request method | Path | Description |
---|---|---|
POST | /network-map/publish | For the node to upload its signed NodeInfo object to the network map. |
POST | /network-map/ack-parameters | For the node to acknowledge to the network map that the new parameters were accepted for a future update. |
GET | /network-map | Retrieve the current signed public network map object. The entire object is signed with the network map certificate which is also attached. |
GET | /network-map/{uuid} | Retrieve the current signed private network map object with given uuid. Format is the same as for /network-map endpoint. |
GET | /network-map/node-info/{hash} | Retrieve a signed NodeInfo as specified in the network map object. |
GET | /network-map/network-parameters/{hash} | Retrieve the signed network parameters (see below). The entire object is signed with the network map certificate which is also attached. |
GET | /network-map/my-hostname | Retrieve the IP address of the caller (and not of the network map). |
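Because the service is plain HTTP, the endpoints can be probed with ordinary tooling. As a hedged illustration, the following Kotlin snippet fetches the signed network map object from a hypothetical server URL:
import java.net.URL

// Minimal smoke test of a network map server; the URL is a placeholder.
fun main() {
    val bytes = URL("https://netmap.example.com/network-map").openStream().use { it.readBytes() }
    println("Fetched signed network map object: ${bytes.size} bytes")
}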
Network maps hosted by R3 or other parties using R3's commercial network management tools typically also provide the following endpoints as a convenience to operators and other users.
Note
We include them here as they can aid debugging but, for the avoidance of doubt, they are not a formal part of the spec, and the node will operate even in their absence.
Request method | Path | Description |
---|---|---|
GET | /network-map/json | Retrieve the current public network map formatted as a JSON document. |
GET | /network-map/json/{uuid} | Retrieve the current network map for a private network indicated by the uuid parameter formatted as a JSON document. |
GET | /network-map/json/node-infos | Retrieve a human readable list of the currently registered NodeInfo files in the public network formatted as a JSON document. |
GET | /network-map/json/node-infos/{uid} | Retrieve a human readable list of the currently registered NodeInfo files in the specified private network map. |
GET | /network-map/json/node-info/{hash} | Retrieve a human readable version of a NodeInfo formatted as a JSON document. |
HTTP is used for the network map service instead of Corda’s own AMQP based peer to peer messaging protocol to enable the server to be placed behind caching content delivery networks like Cloudflare, Akamai, Amazon Cloudfront and so on. By using industrial HTTP cache networks the map server can be shielded from DoS attacks more effectively. Additionally, for the case of distributing small files that rarely change, HTTP is a well understood and optimised protocol. Corda’s own protocol is designed for complex multi-way conversations between authenticated identities using signed binary messages separated into parallel and nested flows, which isn’t necessary for network map distribution.
The additional-node-infos directory¶
Alongside the HTTP network map service, or as a replacement if the node isn’t connected to one, the node polls the
contents of the additional-node-infos
directory located in its base directory. Each file is expected to be the same
signed NodeInfo
object that the network map service vends. These are automatically added to the node’s cache and can
be used to supplement or replace the HTTP network map. If the same node is advertised through both mechanisms then the
latest one is taken.
On startup the node generates its own signed node info file, with a filename of the format nodeInfo-${hash}. It can also be generated using the generate-node-info sub-command without starting the node. To create a simple network without the HTTP network map service, simply place this file in the additional-node-infos directory of every node that's part of the network, for example by using rsync.
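The copying can be scripted in any convenient way. As an illustration, this Kotlin sketch (the helper and directory layout are assumptions, not part of Corda) distributes every node's nodeInfo-* file into each peer's additional-node-infos directory:
import java.nio.file.Files
import java.nio.file.Path
import java.nio.file.StandardCopyOption

// Copies each node's nodeInfo-* file into the additional-node-infos
// directory of every node in a local test network.
fun shareNodeInfos(nodeDirs: List<Path>) {
    val nodeInfos = nodeDirs.flatMap { dir ->
        Files.newDirectoryStream(dir, "nodeInfo-*").use { it.toList() }
    }
    for (dir in nodeDirs) {
        val target = Files.createDirectories(dir.resolve("additional-node-infos"))
        for (info in nodeInfos) {
            Files.copy(info, target.resolve(info.fileName), StandardCopyOption.REPLACE_EXISTING)
        }
    }
}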
Usually, test networks have a structure that is known ahead of time. For the creation of such networks we provide a
network-bootstrapper
tool. This tool pre-generates node configuration directories if given the IP addresses/domain
names of each machine in the network. The generated node directories contain the NodeInfos for every other node on
the network, along with the network parameters file and identity certificates. Generated nodes do not need to all be
online at once - an offline node that isn’t being interacted with doesn’t impact the network in any way. So a test
cluster generated like this can be sized for the maximum size you may need, and then scaled up and down as necessary.
More information can be found in Network Bootstrapper.
Network parameters¶
Network parameters are a set of values that every node participating in the zone needs to agree on and use to correctly interoperate with each other. They can be thought of as an encapsulation of all aspects of a Corda deployment on which reasonable people may disagree. Whilst other blockchain/DLT systems typically require a source code fork to alter various constants (like the total number of coins in a cryptocurrency, port numbers to use etc), in Corda we have refactored these sorts of decisions out into a separate file and allow “zone operators” to make decisions about them. The operator signs a data structure that contains the values and they are distributed along with the network map. Tools are provided to gain user opt-in consent to a new version of the parameters and ensure everyone switches to them at the same time.
If the node is using the HTTP network map service then on first startup it will download the signed network parameters, cache them in a network-parameters file and apply them to the node.
Warning
If the network-parameters file is changed and no longer matches what the network map service is advertising then the node will automatically shut down. The resolution is to delete the incorrect file and restart the node so that the parameters can be downloaded again.
If the node isn’t using a HTTP network map service then it’s expected the signed file is provided by some other means. For such a scenario there is the network bootstrapper tool which in addition to generating the network parameters file also distributes the node info files to the node directories.
The current set of network parameters:
- minimumPlatformVersion: The minimum platform version that the nodes must be running. Any node which is below this will not start.
- notaries: List of the identities and validation types (either validating or non-validating) of the notaries which are permitted in the compatibility zone.
- maxMessageSize: Maximum allowed size in bytes of an individual message sent over the wire. Note that attachments are a special case and may be fragmented for streaming transfer; however, an individual transaction or flow message may not be larger than this value.
- maxTransactionSize: Maximum allowed size in bytes of a transaction. This is the size of the transaction object and its attachments.
- modifiedTime: The time when the network parameters were last modified by the compatibility zone operator.
- epoch: Version number of the network parameters. Starting from 1, this will always increment whenever any of the parameters change.
- whitelistedContractImplementations: List of whitelisted versions of contract code. For each contract class there is a list of SHA-256 hashes of the approved CorDapp JAR versions containing that contract. Read more about zone constraints in API: Contract Constraints.
- eventHorizon: Time after which nodes are considered to be unresponsive and removed from the network map. Nodes republish their NodeInfo on a regular interval; the network map treats that as a heartbeat from the node.
- packageOwnership: List of the network-wide Java packages that were successfully claimed by their owners. Any CorDapp JAR that offers contracts and states in any of these packages must be signed by the owner. This ensures that when a node encounters an owned contract it can uniquely identify it and knows that all other nodes can do the same. Encountering an owned contract in a JAR that is not signed by the rightful owner is most likely a sign of malicious behaviour and should be reported; the transaction verification logic will throw an exception when this happens. Read more in Package ownership.
More parameters will be added in future releases to regulate things like allowed port numbers, whether or not IPv6 connectivity is required for zone members, required cryptographic algorithms and roll-out schedules (e.g. for moving to post quantum cryptography), parameters related to SGX and so on.
Network parameters update process¶
Network parameters are controlled by the zone operator of the Corda network that you are a member of. Occasionally, the operator may need to change these parameters. Many reasons can lead to this decision: for example, adding a notary, setting new fields that were added to enable smooth network interoperability, or changing an existing compatibility constant.
Note
A future release may support the notion of phased roll-out of network parameter changes.
Updating of the parameters by the zone operator is done in two phases:
1. Advertise the proposed network parameter update to the entire network.
2. Switch the network onto the new parameters - also known as a flag day.
The proposed parameter update will include, along with the new parameters, a human-readable description of the changes as well as the deadline for accepting the update. The acceptance deadline marks the date and time that the zone operator intends to switch the entire network onto the new parameters. This will be a reasonable amount of time in the future, giving the node operators time to inspect, discuss and accept the parameters.
The fact a new set of parameters is being advertised shows up in the node logs with the message
“Downloaded new network parameters”, and programs connected via RPC can receive ParametersUpdateInfo
by using
the CordaRPCOps.networkParametersFeed
method. Typically a zone operator would also email node operators to let them
know about the details of the impending change, along with the justification, how to object, deadlines and so on.
/**
* Data class containing information about the scheduled network parameters update. The info is emitted every time node
* receives network map with [ParametersUpdate] which wasn't seen before. For more information see: [CordaRPCOps.networkParametersFeed]
* and [CordaRPCOps.acceptNewNetworkParameters].
* @property hash new [NetworkParameters] hash
* @property parameters new [NetworkParameters] data structure
* @property description description of the update
* @property updateDeadline deadline for accepting this update using [CordaRPCOps.acceptNewNetworkParameters]
*/
@CordaSerializable
data class ParametersUpdateInfo(
val hash: SecureHash,
val parameters: NetworkParameters,
val description: String,
val updateDeadline: Instant
)
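To make this concrete, here is a minimal Kotlin sketch of an RPC client that watches the feed and, after review, accepts an update; the host, port and credentials are placeholders:
import net.corda.client.rpc.CordaRPCClient
import net.corda.core.utilities.NetworkHostAndPort

fun main() {
    val client = CordaRPCClient(NetworkHostAndPort("localhost", 10006))
    client.start("user1", "test").use { connection ->
        val rpc = connection.proxy
        val feed = rpc.networkParametersFeed()
        println("Currently proposed update: ${feed.snapshot}")
        feed.updates.subscribe { update ->
            println("Proposed: ${update.description}, deadline: ${update.updateDeadline}")
            // After reviewing the changes, send back approval (this cannot be undone):
            rpc.acceptNewNetworkParameters(update.hash)
        }
        Thread.sleep(60_000) // keep the connection open while listening
    }
}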
Auto Acceptance¶
If the only changes between the current and new parameters are for auto-acceptable parameters then, unless configured otherwise, the new
parameters will be accepted without user input. The following parameters with the @AutoAcceptable
annotation are auto-acceptable:
/**
* Network parameters are a set of values that every node participating in the zone needs to agree on and use to
* correctly interoperate with each other.
*
* @property minimumPlatformVersion Minimum version of Corda platform that is required for nodes in the network.
* @property notaries List of well known and trusted notary identities with information on validation type.
* @property maxMessageSize This is currently ignored. However, it will be wired up in a future release.
* @property maxTransactionSize Maximum permitted transaction size in bytes.
* @property modifiedTime ([AutoAcceptable]) Last modification time of network parameters set.
* @property epoch ([AutoAcceptable]) Version number of the network parameters. Starting from 1, this will always increment on each new set
* of parameters.
* @property whitelistedContractImplementations ([AutoAcceptable]) List of whitelisted jars containing contract code for each contract class.
* This will be used by [net.corda.core.contracts.WhitelistedByZoneAttachmentConstraint].
* [You can learn more about contract constraints here](https://docs.corda.net/api-contract-constraints.html).
* @property packageOwnership ([AutoAcceptable]) List of the network-wide java packages that were successfully claimed by their owners.
* Any CorDapp JAR that offers contracts and states in any of these packages must be signed by the owner.
* @property eventHorizon Time after which nodes will be removed from the network map if they have not been seen
* during this period
*/
@KeepForDJVM
@CordaSerializable
data class NetworkParameters(
val minimumPlatformVersion: Int,
val notaries: List<NotaryInfo>,
val maxMessageSize: Int,
val maxTransactionSize: Int,
@AutoAcceptable val modifiedTime: Instant,
@AutoAcceptable val epoch: Int,
@AutoAcceptable val whitelistedContractImplementations: Map<String, List<AttachmentId>>,
val eventHorizon: Duration,
@AutoAcceptable val packageOwnership: Map<String, PublicKey>
) {
This behaviour can be turned off by setting the optional node configuration property NetworkParameterAcceptanceSettings.autoAcceptEnabled
to false
. For example:
...
NetworkParameterAcceptanceSettings {
autoAcceptEnabled = false
}
...
It is also possible to switch off this behaviour at a more granular parameter level. This can be achieved by specifying the set of
@AutoAcceptable
parameters that should not be auto-acceptable in the optional
NetworkParameterAcceptanceSettings.excludedAutoAcceptableParameters
node configuration property.
For example, auto-acceptance can be switched off for any updates that change the packageOwnership
map by adding the following to the
node configuration:
...
NetworkParameterAcceptanceSettings {
excludedAutoAcceptableParameters: ["packageOwnership"]
}
...
Manual Acceptance¶
If the auto-acceptance behaviour is turned off via the configuration, or the network parameters change involves parameters that are not auto-acceptable, then manual approval is required.
In this case the node administrator can review the change and decide whether to accept it. The approval should be done before the update deadline. Nodes that don't approve before the deadline will likely be removed from the network map by the zone operator, but that is a decision left to the operator's discretion; for example, the operator might choose to extend the deadline instead.
If the network operator starts advertising a different set of new parameters then that new set overrides the previous set. Only the latest update can be accepted.
To send back parameters approval to the zone operator, the RPC method fun acceptNewNetworkParameters(parametersHash: SecureHash)
has to be called with parametersHash
from the update. Note that approval cannot be undone. You can do this via the Corda
shell (see Node shell):
run acceptNewNetworkParameters parametersHash: "ba19fc1b9e9c1c7cbea712efda5f78b53ae4e5d123c89d02c9da44ec50e9c17d"
If the administrator does not accept the update then, the next time the node polls the network map after the deadline, the advertised network parameters will be the updated ones and the previous set of parameters will no longer be valid. At this point the node will automatically shut down and will require the node operator to bring it back up again.
Cleaning the network map cache¶
Sometimes the node may end up with an inconsistent view of the network. This can occur due to changes in deployment leading to stale data in the database, differing data distribution times, or mistakes in configuration. For these unlikely events, both an RPC method and a command line option exist for clearing the node's local network map cache database. To use them, either run the following from the command line:
java -jar corda.jar clear-network-cache
or call the RPC method clearNetworkMapCache (it can be invoked through the node's shell as run clearNetworkMapCache; for more information on how to log into the node's shell, see Node shell). As we continue to test and harden the implementation, this step shouldn't normally be required. After cleaning the cache, network map data is restored on the next poll from the server or filesystem.
Cipher suites supported by Corda¶
The set of signature schemes supported forms a part of the consensus rules for a Corda DLT network. Thus, it is important that implementations do not support pluggability of any crypto algorithms and do take measures to prevent algorithms supported by any underlying cryptography library from becoming accidentally accessible. Signing a transaction with an algorithm that is not a part of the base specification would result in a transaction being considered invalid by peer nodes and thus a loss of consensus occurring. The introduction of new algorithms over time will require a global upgrade of all nodes.
Corda has been designed to be cryptographically agile, in the sense that the available set of signature schemes is carefully selected based on various factors, such as provided security-level and cryptographic strength, compatibility with various HSM vendors, algorithm standardisation, variety of cryptographic primitives, business demand, option for post-quantum resistance, side channel security, efficiency and rigorous testing.
Before we present the pool of supported schemes it is useful to be familiar with Network certificates and API: Identity. An important design decision in Corda is its shared hierarchy between the TLS and Node Identity certificates.
Certificate hierarchy¶
A Corda network has 8 types of keys and a regular node requires 4 of them:
Network Keys
- The root network CA key
- The doorman CA key
- The network map key
- The service identity key(s) (per service, such as a notary cluster; it can be a Composite key)
Node Keys
- The node CA key(s) (one per node)
- The legal identity key(s) (one per node)
- The tls key(s) (per node)
- The confidential identity key(s) (per node)
We can visualise the certificate structure as follows (for a detailed description of cert-hierarchy, see Network certificates):

Supported cipher suites¶
Due to the shared certificate hierarchy, the following 4 key/certificate types: root network CA, doorman CA, node CA and tls should be compatible with the standard TLS 1.2 protocol. The latter is a requirement of the TLS certificate-path validator. Note that the remaining keys can use any of the 5 supported cipher suites. For instance, the network map key is ECDSA NIST P-256 (secp256r1) in the Corda Network (CN), as it is well supported by the underlying HSM devices, while the default for dev mode is Pure EdDSA (ed25519).
The following list presents the 5 signature schemes currently supported by Corda and shows which of them are compatible with TLS 1.2:
- Pure EdDSA using the ed25519 curve and SHA-512 (TLS: no). EdDSA represents the current state of the art in mainstream cryptography. It implements elliptic curve cryptography with deterministic signatures, a fast implementation, explained constants, side-channel resistance and many other desirable characteristics. However, it is relatively new and not widely supported; for example, you can't use it in TLS yet (a draft RFC exists but is not standardised yet).
- ECDSA using the NIST P-256 curve (secp256r1) and SHA-256 (TLS: yes). This is the default choice for most systems that support elliptic curve cryptography today and is recommended by NIST. This curve is standardised by NIST as part of the "Suite B" cryptographic algorithms and as such is more widely supported than ed25519. It is also supported by the majority of HSM vendors.
- ECDSA using the Koblitz k1 curve (secp256k1) and SHA-256 (TLS: no). secp256k1 is the curve adopted by Bitcoin, and as such there is a wealth of infrastructure, code and advanced algorithms designed for use with it. By supporting it we gain access to the ecosystem of advanced cryptographic techniques and devices pioneered by the Bitcoin community.
- RSA (3072-bit) PKCS#1 and SHA-256 (TLS: yes). RSA is well supported as a signature algorithm by any sort of hardware or software, no matter how old; for example, legacy HSMs will support it, along with obsolete operating systems. RSA uses bigger keys than ECDSA and is therefore recommended only for its backwards-compatibility properties, and only for usage where legacy constraints or government regulation forbid the use of more modern approaches.
- SPHINCS-256 and SHA-512 (experimental) (TLS: no). SPHINCS-256 is a post-quantum secure algorithm that relies only on hash functions. It is included as a hedge against the possibility of a malicious adversary obtaining a quantum computer capable of running Shor's algorithm in the future. SPHINCS is based ultimately on a clever usage of Merkle hash trees. Hash functions are a very heavily studied and well understood area of cryptography; thus, it is assumed that there is a much lower chance of breakthrough attacks on the underlying mathematical problems. However, SPHINCS uses relatively big public keys, is slower, and outputs bigger signatures than the EdDSA, ECDSA and RSA algorithms.
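These schemes are exposed programmatically through net.corda.core.crypto.Crypto. The following Kotlin sketch, for illustration only, lists the supported schemes and signs a message with the Pure EdDSA scheme:
import net.corda.core.crypto.Crypto

fun main() {
    // Enumerate the signature schemes the platform supports.
    Crypto.supportedSignatureSchemes().forEach { println(it.schemeCodeName) }

    // Generate an ed25519 key pair, then sign and verify a message.
    val keyPair = Crypto.generateKeyPair(Crypto.EDDSA_ED25519_SHA512)
    val message = "hello corda".toByteArray()
    val signature = Crypto.doSign(keyPair.private, message)
    check(Crypto.doVerify(keyPair.public, signature, message))
}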
Joining an existing compatibility zone¶
To connect to a compatibility zone you need to register with its certificate signing authority (or doorman) by submitting a certificate signing request (CSR) to obtain a valid identity for the zone. This process is only necessary when the node connects to the network for the first time, or when the certificate expires. You could do this out of band, for instance via email or a web form, but there’s also a simple request/response utility built into the node.
Before using this utility, you must first have received the trust store file containing the root certificate from the zone operator. For high security zones, this might be delivered physically. Then run the following command:
java -jar corda.jar --initial-registration --network-root-truststore-password <trust store password>
By default, the utility expects the trust store file to be in the location certificates/network-root-truststore.jks
.
This can be overridden using the additional --network-root-truststore
flag.
The utility performs the following steps:
- It creates a certificate signing request based on the following information from the node's configuration file (see Node configuration):
- myLegalName Your company's legal name as an X.500 string. X.500 allows differentiation between entities with the same name, as the legal name needs to be unique on the network. If another node has already been permissioned with this name then the permissioning server will automatically reject the request. The request will also be rejected if it violates the legal name rules; see the node naming constraints for more information. You can use the X.500 schema to disambiguate entities that have the same or similar brand names
- emailAddress e.g. “admin@company.com”
- devMode must be set to false
- compatibilityZoneURL or networkServices The address(es) used to register with the compatibility zone and
retrieve the network map. These should be provided to you by the operator of the zone. This must be either:
- compatibilityZoneURL The root address of the network management service. Use this if both the doorman and the network map service are operating on the same URL endpoint
- networkServices The root addresses of the doorman and the network map service. Use this if the doorman and the
network map service are operating on different URL endpoints, where:
- doormanURL is the root address of the doorman. This is the address used for initial registration
- networkMapURL is the root address of the network map service
- It generates a new private/public keypair to sign the certificate signing request
- It submits the request to the doorman server and polls periodically to retrieve the corresponding certificates
- It creates the node’s keystore and trust store using the received certificates
- It creates and stores the node’s TLS keys and legal identity key along with their corresponding certificate-chains
Note
You can exit the utility at any time if the approval process is taking longer than expected. The request
process will resume on restart as long as the --initial-registration
flag is specified.
Joining Corda Testnet¶
Contents
The Corda Testnet is an open public network of Corda nodes on the internet. It is designed to be a complement to the Corda Network where any entity can transact real world value with any other counterparty in the context of any application. The Corda Testnet is designed for “non-production” use in a genuine global context of Corda nodes, including but not limited to CorDapp development, multi-party testing, demonstration and showcasing of applications and services, learning, training and development of the Corda platform technology and specific applications of Corda.
The Corda Testnet is based on exactly the same technology as the main Corda Network, but can be joined on a self-service basis through the automated provisioning system described below.
The Corda Testnet is currently in private beta. Interested parties can request an invitation to join the Corda Testnet by completing a short request form (see below).
Deploying a Corda node to the Corda Testnet¶
Access to the Corda Testnet is enabled by visiting https://testnet.corda.network.

Click on “Join the Corda Testnet”.
Select whether you want to register as a company or as an individual on the Testnet.
This will create an account with the Testnet on-boarding application, which will enable you to provision and manage multiple Corda nodes on Testnet. You will log in to this account to view and manage your Corda Testnet identity certificates.

Fill in the form with your details.
Note
Testnet is currently invitation only. If your request is approved you will receive an email. Please fill in as many details as possible, as this helps us prioritise requests. Approvals take place daily, with a member of the R3 operations team reviewing all invite requests and making a decision based on the current rate of onboarding of new customers.

Once you have been approved, navigate to https://testnet.corda.network and click on “I have an invitation”.
Sign in using either your email address and password, or “Sign in with Google”:

If using Google accounts, approve the Testnet application when prompted:

Note
At this point you may need to verify that your email address is valid (if you are not using a Gmail address). If prompted, check your email and click the link to validate it, then return to the sign-in page and sign in again.
Next agree to the terms of service:

You can now choose how to deploy your Corda node to the Corda Testnet. We strongly recommend hosting your Corda node on a public cloud resource.
Select the cloud provider you wish to use for documentation on how to specifically configure Corda for that environment.

Once your cloud instance is set up you can install and run your Testnet pre-provisioned Corda node by clicking on “Copy” and pasting the one time link into your remote cloud terminal.
The installation script will download the Corda binaries as well as your PKI certificates, private keys and supporting files, and will install and run Corda on your fresh cloud VM. Your node will register itself with the Corda Testnet when it first runs, be added to the global network map, and become visible to counterparties after approximately 5 minutes.
Hosting a Corda node locally is possible but will require manually configuring firewall and port forwarding on your local router. If you want this option then click on the “Download” button to download a Zip file with a pre-configured Corda node.
Note
If you host your node on your own machine or a corporate server you must ensure it is reachable from the public internet at a specific IP address. Please follow the instructions in Deploying Corda to Corda Testnet from your local environment.
A note on identities on Corda Testnet¶
Unlike the main Corda Network, which is designed for verified real-world identities, the Corda Testnet automatically assigns a "distinguished name" as your identity on the network. This prevents name abuse, such as the use of offensive language in names or name squatting, and allows the provisioning of a node to be automatic and instantaneous. It also enables the same user to safely generate many nodes without accidental name conflicts. If you require a human-readable name, please contact support and a partial organisation name can be approved.
Deploying Corda to Testnet¶
Deploying Corda to Corda Testnet from an Azure Cloud Platform VM¶
Contents
This document describes how to set up a virtual machine on the Azure Cloud Platform to deploy your pre-configured Corda node and automatically connect to Testnet. A self-service download link can be obtained from https://testnet.corda.network.
Pre-requisites¶
- Ensure you have a registered Microsoft Azure account which can create virtual machines.
Deploy Corda node¶
Browse to https://portal.azure.com and log in with your Microsoft account.
STEP 1: Create a Resource Group¶
Click on the “Resource groups” link in the side nav in the Azure Portal and then click “Add”:

Fill in the form and click “Create”:

STEP 2: Launch the VM¶
At the top of the left sidenav click on the button with the green cross “Create a resource”.
In this example we are going to use an Ubuntu server so select the latest Ubuntu Server option:

Fill in the form:
- Add a username (to log into the VM) and choose and enter a password
- Choose the resource group we created earlier from the “Use existing” dropdown
- Select a cloud region geographically near to your location to host your VM
Click on “OK”:

Choose a size (“D4S_V3 Standard” is recommended if available) and click “Select”:

Click on “Public IP address” to open the “Settings” panel

Set the IP address to “Static” under “Assignment” and click “OK”:
Note
This is so the IP address for your node does not change frequently in the global network map.

Next toggle “Network Security Group” to advanced and click on “Network security group (firewall)”:

Add the following inbound rules for port 8080 (webserver), ports 10002-10003 (the P2P and RPC ports used by the Corda node respectively), and port 22 (SSH):
Destination port ranges: 10002, Priority: 1041 Name: Port_10002
Destination port ranges: 10003, Priority: 1042 Name: Port_10003
Destination port ranges: 8080, Priority: 1043 Name: Port_8080
Destination port ranges: 22, Priority: 1044 Name: Port_22
Note
The priority has to be a unique number in the range 900 (highest) to 4096 (lowest). Make sure each rule has a unique priority or there will be a validation failure and error message.

Click “OK” and “OK” again on the “Settings” panel:

Click “Create” and wait a few minutes for your instance to be provisioned and start running:

STEP 3: Connect to your VM and set up the environment¶
Once your instance is running click on the “Connect” button and copy the ssh command:

Enter the ssh command into your terminal. At the prompt, type “yes” to continue connecting and then enter the password you configured earlier to log into the remote VM:

STEP 4: Download and set up your Corda node¶
Now that your Azure environment is configured you can switch to the Testnet web application and click “Copy” to get a one-time installation script.
Note
If you have not already set up your account on Testnet, please visit https://testnet.corda.network and sign up.
Note
You can generate as many Testnet identities as you like by refreshing this page to generate a new one-time link.

In the terminal of your cloud instance, paste the command you just copied to install and run your Corda node:
sudo ONE_TIME_DOWNLOAD_KEY=YOUR_UNIQUE_DOWNLOAD_KEY_HERE bash -c "$(curl -L https://testnet.corda.network/api/user/node/install.sh)"
Warning
This command will execute the install script as ROOT on your cloud instance. You may wish to examine the script prior to executing it on your machine.
You can follow the progress of the installation by typing the following command in your terminal:
tail -f /opt/corda/logs/node-<VM-NAME>.log
Once the node has booted up, you can navigate to the external web address of the instance on port 8080:
http://<PUBLIC-IP-ADDRESS>:8080/
If everything is working, you should see the following:

Testing your deployment¶
To test that your deployment is working correctly, follow the instructions in Using the Node Explorer to test a Corda node on Corda Testnet to set up the Finance CorDapp and issue cash to a counterparty.
This will also demonstrate how to install a custom CorDapp.
Deploying Corda to Corda Testnet from an AWS Cloud Platform VM¶
Contents
This document explains how to set up a virtual machine on the AWS Cloud Platform to deploy your pre-configured Corda node, which can then connect directly to the Corda Testnet. A self-service download link can be obtained from https://testnet.corda.network.
Pre-requisites¶
- Ensure you have a registered Amazon AWS account which can create virtual machines and you are logged on to the AWS console: https://console.aws.amazon.com.
Deploy Corda node¶
Browse to https://console.aws.amazon.com and log in with your AWS account.
STEP 1: Launch a new virtual machine
Click on Launch a virtual machine with EC2.

In the quick start wizard scroll down and select the most recent Ubuntu machine image as the Amazon Machine Image (AMI).

Select the instance type (for example t2.xlarge).

Configure a couple of other settings before reviewing and launching.
Under the storage tab (Step 4) increase the storage to 40GB:

Configure the security group (Step 6) to open the firewall ports which Corda uses.

Add a firewall rule for port range 10002-10003 and allow connection from Anywhere. Add another rule for the webserver on port 8080.
Click on the Review and Launch button, then if everything looks OK click Launch.
You will be prompted to set up keys to securely access the VM remotely over ssh. Select “Create a new key pair” from the drop down and enter a name for the key file. Click download to get the keys and keep them safe on your local machine.
Note
These keys are just for connecting to your VM and are separate from the keys Corda will use to sign transactions. These keys will be generated as part of the download bundle.

Click “Launch Instances”.
Click on the link to go to the Instances pages in the AWS console where after a few minutes you will be able to see your instance running.

STEP 2: Set up static IP address
On AWS a permanent IP address is called an Elastic IP. Click on the “Elastic IP” link in the navigation panel on the left hand side of the console and then click on “Allocate new address”:

Complete the form, then once the address is allocated click on "Actions" and then "Associate address":

Then select the instance you created for your Corda node to attach the IP address to.
STEP 3: Connect to your VM and set up the environment
In the instances console click on “Connect” and follow the instructions to connect to your instance using ssh.


STEP 4: Download and set up your Corda node
Now that your AWS environment is configured, you can switch back to the Testnet web application and click the copy-to-clipboard button to get a one-time installation script.
Note
If you have not already set up your account on Testnet, please visit https://testnet.corda.network and sign up.

You can generate as many Testnet identities as you like by refreshing this page to generate a new one-time link.
In the terminal of your cloud instance, paste the command you just copied to install and run your Corda node:
sudo ONE_TIME_DOWNLOAD_KEY=YOUR_UNIQUE_DOWNLOAD_KEY_HERE bash -c "$(curl -L https://testnet.corda.network/api/user/node/install.sh)"
Warning
This command will execute the install script as ROOT on your cloud instance. You may wish to examine the script prior to executing it on your machine.
You can follow the progress of the installation by typing the following command in your terminal:
tail -f /opt/corda/logs/node-<VM-NAME>.log
Once the node has booted up you can navigate to the external web address of the instance on port 8080. If everything is working you should see the following:

Testing your deployment¶
To test that your deployment is working correctly, follow the instructions in Using the Node Explorer to test a Corda node on Corda Testnet to set up the Finance CorDapp and issue cash to a counterparty.
This will also demonstrate how to install a custom CorDapp.
Deploying Corda to Corda Testnet from a Google Cloud Platform VM¶
Contents
This document explains how to set up a virtual machine on the Google Cloud Platform (GCP) to deploy your pre-configured Corda node, which can then connect directly to the Corda Testnet. A self-service download link can be obtained from https://testnet.corda.network.
Pre-requisites¶
- Ensure you have a registered Google Cloud Platform account with billing enabled (https://cloud.google.com/billing/docs/how-to/manage-billing-account) which can create virtual machines under your subscription(s) and you are logged on to the GCP console: https://console.cloud.google.com.
Deploy Corda node¶
Browse to https://console.cloud.google.com and log in with your Google credentials.
STEP 1: Create a GCP Project
In the project drop down click on the plus icon to create a new project to house your Corda resources.



Enter a project name and click Create.
STEP 2: Launch the VM
In the left hand side nav click on Compute Engine.

Click on Create Instance.

Fill in the form with the desired VM specs:
Recommended minimum: 4 vCPUs with 15GB memory and a 40GB persistent disk, running Ubuntu 16.04 LTS.
Allow full API access.
Don’t worry about firewall settings, as you will configure those later.

Click Create and wait a few seconds for your instance to provision and start running.
STEP 3: Connect to your VM and set up the environment
Once your instance is running click on the SSH button to launch a cloud SSH terminal in a new window.


Run the following to configure the firewall to allow Corda traffic:
gcloud compute firewall-rules create nodetonode --allow tcp:10002
gcloud compute firewall-rules create nodetorpc --allow tcp:10003
gcloud compute firewall-rules create webserver --allow tcp:8080
Promote the ephemeral IP address associated with this instance to a static IP address.
First check the region and select the one you are using from the list:
gcloud compute regions list
Find your external IP:
gcloud compute addresses list
Run this command with the ephemeral IP address as the argument to the --addresses flag and the region:
gcloud compute addresses create corda-node --addresses 35.204.53.61 --region europe-west4
STEP 4: Download and set up your Corda node
Now that your GCP environment is configured, you can switch to the Testnet web application and click on the copy-to-clipboard button to get a one-time installation script.
注解
If you have not already set up your account on Testnet then please visit https://testnet.corda.network and sign up.

You can generate as many Testnet identities as you like by refreshing this page to generate a new one-time link.
In the terminal of your cloud instance, paste the command you just copied to install and run your unique Corda instance:
sudo ONE_TIME_DOWNLOAD_KEY=YOUR_UNIQUE_DOWNLOAD_KEY_HERE bash -c "$(curl -L https://testnet.corda.network/api/user/node/install.sh)"
警告
This command will execute the install script as ROOT on your cloud instance. You may wish to examine the script prior to executing it on your machine.
You can follow the progress of the installation by typing the following command in your terminal:
tail -f /opt/corda/logs/node-<VM-NAME>.log
Once the node has booted up you can navigate to the external web address of the instance on port 8080. If everything is working you should see the following:

Testing your deployment¶
To test your deployment is working correctly follow the instructions in Using the Node Explorer to test a Corda node on Corda Testnet to set up the Finance CorDapp and issue cash to a counterparty.
This will also demonstrate how to install a custom CorDapp.
Deploying Corda to Corda Testnet from your local environment¶
This document explains how to set up your local network to enable a Corda node to connect to the Corda Testnet. This assumes you are downloading a node ZIP from: https://testnet.corda.network.
Pre-requisites¶
- Register for an account on https://testnet.corda.network.
Set up your local network¶
For a Corda node to be able to connect to the Corda Testnet and be reachable by counterparties on that network it needs to be reachable on the open internet. Corda is a server which requires an externally visible IP address and several ports in order to operate correctly.
We recommend running your Corda node on cloud infrastructure. If you wish to run Corda on your local machine, then you will need to configure your network to enable the Corda node to be reachable from the internet.
注解
You will need access to your network router/gateway to the internet. If you do not have direct access then contact your administrator.
The following steps will describe how to use port forwarding on your router to make sure packets intended for Corda are routed to the right place on your local network.
Set up static IP address local host machine¶
The next steps will configure your router to forward packets to the Corda node, but this requires the host machine to have a static IP address. If this isn’t done and the network is using DHCP dynamic address allocation, then the next time the host machine is rebooted it may be on a different IP address and the port forwarding will no longer work.
Please consult your operating system documentation for instructions on setting a static IP on the host machine.
Set up port forwarding on your router¶
Port forwarding is a method of making a computer on your network accessible to computers on the Internet, even though it is behind a router.
注解
All routers are slightly different and you will need to consult the documentation for your specific make and model.
Log in to the admin page of your router by entering its address (often 192.168.0.1) in your browser’s address bar.
注解
Router administration IP and log in credentials are usually on the bottom or side of your router.
Navigate to the port forwarding section of the admin console.
Add rules for the following ports which Corda uses:
- 10002
- 10003
- 8080
注解
These ports are the defaults for Testnet, specified in the node.conf. If they conflict with existing services on your host machine, they can be changed in the /opt/corda/node.conf file.
For each rule you will also typically have to specify the rule name, the static IP address of the host machine we configured earlier (the same in each case) and the protocol (which is TCP in all cases here).
Please consult your router documentation for specific details on enabling port forwarding.
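For reference, the first two of these ports come from the node’s configuration file. A fragment of /opt/corda/node.conf might look like the following (illustrative values only; your generated file is authoritative):
p2pAddress = "your.external.ip:10002"
rpcSettings {
    address = "0.0.0.0:10003"
}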
Open firewall ports¶
If you are operating a firewall on your host machine or local network you will also need to open the above ports for incoming traffic.
Please consult your firewall documentation for details.
Optional: Configure a static external IP address¶
Corda expects nodes to have stable addresses over long periods of time. ISPs typically assign dynamic IP addresses to a router and so if your router is rebooted it may not obtain the same external IP and therefore your Corda node will change its address on the Testnet.
You can request a static IP address from your ISP however this may incur a cost.
If the IP address does change then this doesn’t cause issues but it will result in an update to the network map which then needs to be propagated to all peers in the network. There may be some delay in the ability to transact while this happens.
警告
Corda nodes are expected to be online all the time and will send a heartbeat to the network map server to indicate they are operational. If they go offline for a period of time (~24 hours in the case of Testnet), the node will be removed from the network map. Any nodes which have queued messages for your node will drop these messages; they won’t be delivered, and unexpected behaviour may occur.
Test if the ports are open¶
You can use a port checking tool to make sure the ports are open properly.
Download and install your node¶
Navigate to https://testnet.corda.network/platform.
Click on the Download button and wait for the ZIP file to download:

Unzip the file in your Corda root directory:
mkdir corda
cd corda
cp <PATH_TO_DOWNLOAD>/node.zip .
unzip node.zip
cd node
Run the run-corda.sh script to start your Corda node:
./run-corda.sh
Congratulations! You now have a running Corda node on Testnet.
警告
It is possible to copy the node.zip file from your local machine to any other host machine and run the Corda node from there. Do not run multiple copies of the same node (i.e. with the same identity). If a new copy of the node appears on the network, the network map server will interpret this as a change in the address of the node and route traffic to the most recent instance. Any states which are on the old node will no longer be available and undefined behaviour may result. Please provision a new node from the application instead.
Testing your deployment¶
To test your deployment is working correctly follow the instructions in Using the Node Explorer to test a Corda node on Corda Testnet to set up the Finance CorDapp and issue cash to a counterparty.
Using the Node Explorer to test a Corda node on Corda Testnet¶
This document will explain how to test the installation of a Corda node on Testnet.
Prerequisites¶
This guide assumes you have deployed a Corda node to the Corda Testnet.
注解
If you need to set up a node on Testnet first please follow the instructions: Joining Corda Testnet.
Get the testing tools¶
To run the tests and make sure your node is connecting correctly to the network you will need to download and install a couple of resources.
Log into your Cloud VM via SSH.
Stop the Corda node(s) running on your cloud instance.
ps aux | grep corda.jar | awk '{ print $2 }' | xargs sudo kill
Download the finance CorDapp
In the terminal on your cloud instance run:
wget https://ci-artifactory.corda.r3cev.com/artifactory/corda-releases/net/corda/corda-finance-contracts/4.1-RC01/corda-finance-contracts-4.1-RC01.jar
wget https://ci-artifactory.corda.r3cev.com/artifactory/corda-releases/net/corda/corda-finance-workflows/4.1-RC01/corda-finance-workflows-4.1-RC01.jar
These JARs are required to run some flows to check your connections, and to issue/transfer cash to counterparties. Copy them to the Corda installation location:
sudo cp /home/<USER>/corda-finance-*-4.1-RC01.jar /opt/corda/cordapps/
Run the following to create a config file for the finance CorDapp:
echo "issuableCurrencies = [ USD ]" > /opt/corda/cordapps/config/corda-finance-4.1-RC01.conf
Restart the Corda node:
cd /opt/corda
sudo ./run-corda.sh
Your node is now running the Finance CorDapp.
注解
You can double-check that the CorDapp is loaded in the log file /opt/corda/logs/node-<VM-NAME>.log. This file will list installed apps at startup. Search for Loaded CorDapps in the logs.
Now download the Node Explorer to your LOCAL machine:
注解
Node Explorer is a JavaFX GUI which connects to the node over the RPC interface and allows you to send transactions.
Download the Node Explorer from here:
http://ci-artifactory.corda.r3cev.com/artifactory/corda-releases/net/corda/corda-tools-explorer/4.1-RC01/corda-tools-explorer-4.1-RC01.jar
警告
This Node Explorer is incompatible with the Corda Enterprise distribution and vice versa as they currently use different serialisation schemes (Kryo vs AMQP).
Run the Node Explorer tool on your LOCAL machine.
java -jar corda-tools-explorer-4.1-RC01.jar
Connect to the node¶
To connect to the node you will need:
- The IP address of your node (the public IP of your cloud instance). You can find this in the instance page of your cloud console.
- The port number of the RPC interface to the node, specified in /opt/corda/node.conf in the rpcSettings section (by default this is 10003 on Testnet).
- The username and password of the RPC interface of the node, also in node.conf in the rpcUsers section (by default the username is cordazoneservice on Testnet).
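For reference, the relevant fragment of /opt/corda/node.conf looks roughly like this (values are illustrative; check your own file for the actual port, username and password):
rpcSettings {
    address = "0.0.0.0:10003"
}
rpcUsers = [
    { username = "cordazoneservice", password = "<your-password>", permissions = ["ALL"] }
]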
Click on Connect to log into the node.
Check your network identity and counterparties¶
Once Explorer has logged in to your node over RPC, click on the Network tab in the side navigation of the Explorer UI:

If your Corda node is correctly configured and connected to the Testnet then you should be able to see the identities of your node, the Testnet notary and the network map listing all the counterparties currently on the network.
Test issuance transaction¶
Now we are going to try to issue some cash to a ‘bank’. Click on the Cash tab.

Now click on New Transaction and create an issuance to a known counterparty on the network by filling in the form:

Click Execute and the transaction will start.

Click on the red X to close the notification window and click on the Transactions tab to see the transaction in progress, or wait for a success message to be displayed:

Congratulations! You have now successfully installed a CorDapp and executed a transaction on the Corda Testnet.
Setting up a dynamic compatibility zone¶
目录
Do you need to create your own dynamic compatibility zone?¶
By dynamic compatibility zone, we mean a compatibility zone that relies on a network map server to allow nodes to join dynamically, instead of requiring each node to be bootstrapped and have the node-infos distributed manually. While this may sound appealing, think twice before going down this route:
- If you need to test a CorDapp, it is easier to create a test network using the network bootstrapper tool (see below)
- If you need to control who uses your CorDapp, it is easier to apply permissioning by creating a business network (see below)
Testing. Creating a production-ready zone isn’t necessary for testing as you can use the network bootstrapper tool to create all the certificates, keys, and distribute the needed map files to run many nodes. The bootstrapper can create a network locally on your desktop/laptop but it also knows how to automate cloud providers via their APIs and using Docker. In this way you can bring up a simulation of a real Corda network with different nodes on different machines in the cloud for your own testing. Testing this way has several advantages, most obviously that you avoid race conditions in your tests caused by nodes/tests starting before all map data has propagated to all nodes. You can read more about the reasons for the creation of the bootstrapper tool in a blog post on the design thinking behind Corda’s network map infrastructure.
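For example, given a directory containing a node.conf for each node, the bootstrapper can generate a complete local test network in one command (the JAR name matches the CLI table later in this document; the directory name is illustrative):
java -jar corda-tools-network-bootstrapper-4.1-RC01.jar --dir ./my-test-network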
Permissioning. Creating a zone is also unnecessary for imposing permissioning requirements beyond those of the base Corda network. You can control who can use your app by creating a business network. A business network is what we call a coalition of nodes that have chosen to run a particular app within a given commercial context. Business networks aren’t represented in the Corda API at this time, partly because the technical side is so simple. You can create one via a simple three-step process:
- Distribute a list of X.500 names that are members of your business network. You can use the reference Business Network Membership Service implementation. Alternatively, you could do this by hosting a text file with one name per line on your website at a fixed HTTPS URL. You could also write a simple request/response flow that serves the list over the Corda protocol itself, although this requires the business network to have its own node.
- Write a bit of code that downloads and caches the contents of this file on disk, and which loads it into memory in the node. A good place to do this is in a class annotated with @CordaService, because this class can expose a Set<Party> field representing the membership of your service.
- In your flows, use serviceHub.findService to get a reference to your @CordaService class, read the list of members and, at the start of each flow, throw a FlowException if the counterparty isn’t in the membership list.
In this way you can impose a centrally controlled ACL that all members will collectively enforce.
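A minimal Kotlin sketch of steps 2 and 3 might look like the following. The file name and parsing are illustrative assumptions, and the lookup shown uses the cordaService method on ServiceHub; treat this as a sketch rather than a prescribed implementation:
import net.corda.core.identity.CordaX500Name
import net.corda.core.node.AppServiceHub
import net.corda.core.node.services.CordaService
import net.corda.core.serialization.SingletonSerializeAsToken
import java.io.File

// Illustrative sketch: a membership list cached on disk and exposed as a Corda service.
@CordaService
class MembershipService(private val services: AppServiceHub) : SingletonSerializeAsToken() {
    // One X.500 name per line, downloaded and cached by some other process.
    val members: Set<CordaX500Name> = File("members.txt").readLines()
            .filter { it.isNotBlank() }
            .map { CordaX500Name.parse(it) }
            .toSet()
}

// Inside a flow, refuse to proceed if the counterparty is not a member:
//     val membership = serviceHub.cordaService(MembershipService::class.java)
//     if (counterparty.name !in membership.members)
//         throw FlowException("${counterparty.name} is not in the membership list")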
注解
A production-ready Corda network and a new iteration of the testnet will be available soon.
Why create your own zone?¶
The primary reason to create a zone and provide the associated infrastructure is control over network parameters. These are settings that control Corda’s operation, and on which all users in a network must agree. Failure to agree would create the Corda equivalent of a blockchain “hard fork”. Parameters control things like the root of identity, how quickly users should upgrade, how long nodes can be offline before they are evicted from the system and so on.
Creating a zone involves the following steps:
- Create the zone private keys and certificates. This procedure is conventional and no special knowledge is required: any self-signed set of certificates can be used. A professional quality zone will probably keep the keys inside a hardware security module (as the main Corda network and test networks do).
- Write a network map server.
- Optionally, create a doorman server.
- Finally, you would select and generate your network parameter file.
How to create your own compatibility zone¶
Using an existing network map implementation¶
You can use an existing network map implementation such as the Cordite Network Map Service to create a dynamic compatibility zone.
Creating your own network map implementation¶
Writing a network map server¶
This server implements a simple HTTP based protocol described on the “Network Map” page. The map server is responsible for gathering NodeInfo files from nodes, storing them, and distributing them back to the nodes in the zone. By doing this it is also responsible for choosing who is in and who is out: having a signed identity certificate is not enough to be a part of a Corda zone; you also need to be listed in the network map. It can be thought of as a DNS equivalent. If you want to de-list a user, you would do it here.
It is very likely that your map server won’t be entirely standalone, but rather, integrated with whatever your master user database is.
The network map server also distributes signed network parameter files and controls the rollout schedule for when they become available for download and opt-in, and when they become enforced. This is again a policy decision you will probably choose to place some simple UI or workflow tooling around, in particular to enforce restrictions on who can edit the map or the parameters.
Writing a doorman server¶
This step is optional because your users can obtain a signed certificate in many different ways. The doorman protocol is again a very simple HTTP based approach in which a node creates keys and requests a certificate, polling until it gets back what it expects. However, you could also integrate this process with the rest of your signup process. For example, by building a tool that’s integrated with your payment flow (if payment is required to take part in your zone at all). Alternatively you may wish to distribute USB smartcard tokens that generate the private key on first use, as is typically seen in national PKIs. There are many options.
If you do choose to make a doorman server, the bulk of the code you write will be workflow related. For instance, related to keeping track of an applicant as they proceed through approval. You should also impose any naming policies you have in the doorman process. If names are meant to match identities registered in government databases then that should be enforced here, alternatively, if names can be self-selected or anonymous, you would only bother with a deduplication check. Again it will likely be integrated with a master user database.
Corda does not currently provide a doorman or network map service out of the box, partly because when stripped of the zone specific policy there isn’t much to them: just a basic HTTP server that most programmers will have favourite frameworks for anyway.
The protocol is:
- If $URL = https://some.server.com/some/path
- Node submits a PKCS#10 certificate signing request using HTTP POST to $URL/certificate. It will have a MIME type of application/octet-stream. The Client-Version header is set to “1.0”.
- The server returns an opaque string that references this request (let’s call it $requestid), or an HTTP error if something went wrong.
- The returned request ID should be persisted to disk, to handle zones where approval may take a long time due to manual intervention being required.
- The node starts polling $URL/$requestid using HTTP GET. The poll interval can be controlled by the server returning a response with a Cache-Control header.
- If the request is answered with a 200 OK response, the body is expected to be a zip file. Each file is expected to be a binary X.509 certificate, and the certs are expected to be in order.
- If the request is answered with a 204 No Content response, the node will try again later.
- If the request is answered with a 403 Not Authorized response, the node will treat that as request rejection and give up.
- Other response codes will cause the node to abort with an exception.
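To make the protocol concrete, here is a rough client-side sketch using only JDK classes. It is purely illustrative; the node already implements this logic internally:
import java.net.HttpURLConnection
import java.net.URL

// Submit a PKCS#10 CSR; returns the opaque request id on success.
fun submitCsr(baseUrl: String, csrBytes: ByteArray): String {
    val conn = URL("$baseUrl/certificate").openConnection() as HttpURLConnection
    conn.requestMethod = "POST"
    conn.doOutput = true
    conn.setRequestProperty("Content-Type", "application/octet-stream")
    conn.setRequestProperty("Client-Version", "1.0")
    conn.outputStream.use { it.write(csrBytes) }
    check(conn.responseCode == 200) { "CSR submission failed: ${conn.responseCode}" }
    return conn.inputStream.bufferedReader().readText()
}

// Poll for the certificate chain; null means "not ready yet, try again later".
fun pollForCertificates(baseUrl: String, requestId: String): ByteArray? {
    val conn = URL("$baseUrl/$requestId").openConnection() as HttpURLConnection
    return when (conn.responseCode) {
        200 -> conn.inputStream.readBytes()  // a zip of X.509 certificates, in order
        204 -> null                          // approval still pending
        403 -> throw IllegalStateException("Request rejected by the doorman")
        else -> throw IllegalStateException("Unexpected response: ${conn.responseCode}")
    }
}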
Setting zone parameters¶
Zone parameters are stored in a file containing a Corda AMQP serialised SignedDataWithCert<NetworkParameters> object. It is easy to create such a file with a small Java or Kotlin program. The NetworkParameters object is a simple data holder that could be read from e.g. a config file, or settings from a database. Signing and saving the resulting file is just a few lines of code. A full example can be found in NetworkParametersCopier.kt in the source tree, but a flavour of it looks like this:
NetworkParameters networkParameters = new NetworkParameters(
    4, // minPlatformVersion
    Collections.emptyList(), // the `NotaryInfo`s of all the network's notaries
    1024 * 1024 * 20, // maxMessageSize
    1024 * 1024 * 15, // maxTransactionSize
    Instant.now(), // modifiedTime
    2, // epoch
    Collections.emptyMap() // whitelisted contract code JARs
);
CertificateAndKeyPair signingCertAndKeyPair = loadNetworkMapCA();
SerializedBytes<SignedDataWithCert<NetworkParameters>> bytes = SerializedBytes.from(signingCertAndKeyPair.sign(networkParameters));
Files.copy(bytes.open(), Paths.get("params-file"));
val networkParameters = NetworkParameters(
    minimumPlatformVersion = 4,
    notaries = listOf(...),
    maxMessageSize = 1024 * 1024 * 20, // 20mb, for example
    maxTransactionSize = 1024 * 1024 * 15,
    modifiedTime = Instant.now(),
    epoch = 2,
    ... etc ...
)
val signingCertAndKeyPair: CertificateAndKeyPair = loadNetworkMapCA()
val signedParams: SerializedBytes<SignedNetworkParameters> = signingCertAndKeyPair.sign(networkParameters).serialize()
signedParams.open().copyTo(Paths.get("/some/path"))
Each individual parameter is documented in the JavaDocs/KDocs for the NetworkParameters class. The network map certificate is usually chained off the root certificate, and can be created according to the instructions above. Each time the zone parameters are changed, the epoch should be incremented. Epochs are essentially version numbers for the parameters, and they therefore cannot go backwards. Once saved, the new parameters can be served by the network map server.
Selecting parameter values¶
How to choose the parameters? This is the most complex question facing you as a new zone operator. Some settings may seem straightforward and others may involve cost/benefit tradeoffs specific to your business. For example, you could choose to run a validating notary yourself, in which case you would (in the absence of SGX) see all the users’ data. Or you could run a non-validating notary, with BFT fault tolerance, which implies recruiting others to take part in the cluster.
New network parameters will be added over time as Corda evolves. You will need to ensure that when your users upgrade, all the new network parameters are being served. You can ask for advice on the corda-dev mailing list.
Setting up a notary service¶
Corda comes with several notary implementations built-in:
- Single-node: a simple notary service that persists notarisation requests in the node’s database. It is easy to set up and is recommended for testing, and for production networks that do not have strict availability requirements.
- Crash fault-tolerant (experimental): a highly available notary service operated by a single party.
- Byzantine fault-tolerant (experimental): a decentralised highly available notary service operated by a group of parties.
Single-node¶
To have a regular Corda node provide a notary service, you simply need to set the appropriate notary configuration values before starting it:
notary : { validating : false }
For a validating notary service specify:
notary : { validating : true }
See the documentation on validating versus non-validating notaries for more details.
For clients to be able to use the notary service, its identity must be added to the network parameters. This will be done automatically when creating the network if using the Network Bootstrapper. See the network documentation for more details.
Crash fault-tolerant (experimental)¶
Corda provides a prototype Raft-based highly available notary implementation. You can try it out on our notary demo page. Note that it has known limitations and is not recommended for production use.
Byzantine fault-tolerant (experimental)¶
A prototype BFT notary implementation based on BFT-Smart is available. You can try it out on our notary demo page. Note that it is still experimental and there is active work ongoing for a production ready solution. Additionally, BFT-Smart requires Java serialization which is disabled by default in Corda due to security risks, and it will only work in dev mode where this can be customised.
We do not recommend using it in any long-running test or production deployments.
Official Corda Docker Image¶
Running a node connected to a Compatibility Zone in Docker¶
注解
Requirements: A valid node.conf and a valid set of certificates - (signed by the CZ)
In this example, the certificates are stored at /home/user/cordaBase/certificates, the node configuration is in /home/user/cordaBase/config/node.conf and the CorDapps to run are in /path/to/cordapps:
docker run -ti \
--memory=2048m \
--cpus=2 \
-v /home/user/cordaBase/config:/etc/corda \
-v /home/user/cordaBase/certificates:/opt/corda/certificates \
-v /home/user/cordaBase/persistence:/opt/corda/persistence \
-v /home/user/cordaBase/logs:/opt/corda/logs \
-v /path/to/cordapps:/opt/corda/cordapps \
-p 10200:10200 \
-p 10201:10201 \
corda/corda-zulu-5.0-snapshot:latest
As the node runs within a container, several mount points are required:
- CorDapps - CorDapps must be mounted at location /opt/corda/cordapps
- Certificates - certificates must be mounted at location /opt/corda/certificates
- Config - the node config must be mounted at location /etc/corda/node.config
- Logging - all log files will be written to location /opt/corda/logs
If using the H2 database:
- Persistence - the folder to hold the H2 database files must be mounted at location /opt/corda/persistence
Running a node connected to a Bootstrapped Network¶
注解
Requirements: A valid node.conf, a valid set of certificates, and an existing network-parameters file
In this example, we have previously generated a network-parameters file using the bootstrapper tool, which is stored at /home/user/sharedFolder/network-parameters:
docker run -ti \
--memory=2048m \
--cpus=2 \
-v /home/user/cordaBase/config:/etc/corda \
-v /home/user/cordaBase/certificates:/opt/corda/certificates \
-v /home/user/cordaBase/persistence:/opt/corda/persistence \
-v /home/user/cordaBase/logs:/opt/corda/logs \
-v /home/TeamCityOutput/cordapps:/opt/corda/cordapps \
-v /home/user/sharedFolder/node-infos:/opt/corda/additional-node-infos \
-v /home/user/sharedFolder/network-parameters:/opt/corda/network-parameters \
-p 10200:10200 \
-p 10201:10201 \
corda/corda-zulu-5.0-snapshot:latest
There is a new mount, /home/user/sharedFolder/node-infos:/opt/corda/additional-node-infos, which is used to hold the nodeInfo of all the nodes within the network.
As the node within the container starts up, it will place its own nodeInfo into this directory. This allows other nodes also using this folder to see this new node.
Generating configs and certificates¶
It is possible to utilize the image to automatically generate a sensible minimal configuration for joining an existing Corda network.
Joining TestNet¶
注解
Requirements: A valid registration for TestNet and a one-time code for joining TestNet.
docker run -ti \
-e MY_PUBLIC_ADDRESS="corda-node.example.com" \
-e ONE_TIME_DOWNLOAD_KEY="bbcb189e-9e4f-4b27-96db-134e8f592785" \
-e LOCALITY="London" -e COUNTRY="GB" \
-v /home/user/docker/config:/etc/corda \
-v /home/user/docker/certificates:/opt/corda/certificates \
corda/corda-zulu-5.0-snapshot:latest config-generator --testnet
$MY_PUBLIC_ADDRESS will be the public address that this node will be advertised on.
$ONE_TIME_DOWNLOAD_KEY is the one-time code provided for joining TestNet.
$LOCALITY and $COUNTRY must be set to the values provided when joining TestNet.
When the container has finished executing config-generator, the following will be true:
- A skeleton but sensible minimum node.conf is present in /home/user/docker/config
- A set of certificates signed by TestNet is present in /home/user/docker/certificates
It is now possible to start the node using the generated config and certificates:
docker run -ti \
--memory=2048m \
--cpus=2 \
-v /home/user/docker/config:/etc/corda \
-v /home/user/docker/certificates:/opt/corda/certificates \
-v /home/user/docker/persistence:/opt/corda/persistence \
-v /home/user/docker/logs:/opt/corda/logs \
-v /home/user/corda/samples/bank-of-corda-demo/build/nodes/BankOfCorda/cordapps:/opt/corda/cordapps \
-p 10200:10200 \
-p 10201:10201 \
corda/corda-zulu-5.0-snapshot:latest
Joining an existing Compatibility Zone¶
注解
Requirements: A Compatibility Zone, the Zone Trust Root and authorisation to join said Zone.
It is possible to use the image to automate the process of joining an existing Zone, as detailed here.
The first step is to obtain the Zone Trust Root and place it within a directory. In the example below, the Trust Root is stored at /home/user/docker/certificates/network-root-truststore.jks. It is possible to configure the name of the Trust Root file by setting the TRUST_STORE_NAME environment variable in the container.
docker run -ti --net="host" \
-e MY_LEGAL_NAME="O=EXAMPLE,L=Berlin,C=DE" \
-e MY_PUBLIC_ADDRESS="corda.example-hoster.com" \
-e NETWORKMAP_URL="https://map.corda.example.com" \
-e DOORMAN_URL="https://doorman.corda.example.com" \
-e NETWORK_TRUST_PASSWORD="trustPass" \
-e MY_EMAIL_ADDRESS="cordauser@r3.com" \
-v /home/user/docker/config:/etc/corda \
-v /home/user/docker/certificates:/opt/corda/certificates \
corda/corda-zulu-5.0-snapshot:latest config-generator --generic
Several environment variables must also be passed to the container to allow it to register:
MY_LEGAL_NAME - The X.500 name to use when generating the config. This must be the same as registered with the Zone.
MY_PUBLIC_ADDRESS - The public address to advertise the node on.
NETWORKMAP_URL - The address of the Zone’s network map service (this should be provided to you by the Zone).
DOORMAN_URL - The address of the Zone’s doorman service (this should be provided to you by the Zone).
NETWORK_TRUST_PASSWORD - The password to the Zone Trust Root (this should be provided to you by the Zone).
MY_EMAIL_ADDRESS - The email address to use when generating the config. This must be the same as registered with the Zone.
There are some optional variables which allow customisation of the generated config:
MY_P2P_PORT - The port to advertise the node on (defaults to 10200). If changed, ensure the container is launched with the correct published ports.
MY_RPC_PORT - The port to open for RPC connections to the node (defaults to 10201). If changed, ensure the container is launched with the correct published ports.
Once the container has finished performing the initial registration, the node can be started as normal:
docker run -ti \
--memory=2048m \
--cpus=2 \
-v /home/user/docker/config:/etc/corda \
-v /home/user/docker/certificates:/opt/corda/certificates \
-v /home/user/docker/persistence:/opt/corda/persistence \
-v /home/user/docker/logs:/opt/corda/logs \
-v /home/user/corda/samples/bank-of-corda-demo/build/nodes/BankOfCorda/cordapps:/opt/corda/cordapps \
-p 10200:10200 \
-p 10201:10201 \
corda/corda-zulu-5.0-snapshot:latest
Azure Marketplace¶
To help you design, build and test applications on Corda, called CorDapps, a Corda network can be deployed on the Microsoft Azure Marketplace.
This Corda network offering builds a pre-configured network of Corda nodes as Ubuntu virtual machines (VMs). The network comprises a Notary node and up to nine Corda nodes using a version of Corda of your choosing. The following guide will also show you how to load a simple Yo! CorDapp which demonstrates the basic principles of Corda. When you are ready to go further with developing on Corda and start making contributions to the project, head over to Corda.net.
Pre-requisites¶
- Ensure you have a registered Microsoft Azure account which can create virtual machines under your subscription(s) and you are logged on to the Azure portal (portal.azure.com)
- It is recommended you generate a private-public SSH key pair (see here)
Deploying the Corda Network¶
Browse to portal.azure.com, log in, search the Azure Marketplace for Corda and select ‘Corda Single Ledger Network’.
Click the ‘Create’ button.
STEP 1: Basics
Define the basic parameters which will be used to pre-configure your Corda nodes.
- Resource prefix: Choose an appropriate descriptive name for your Corda nodes. This name will prefix the node hostnames
- VM user name: This is the user login name on the Ubuntu VMs. Leave it as azureuser or define your own
- Authentication type: Select ‘SSH public key’, then paste the contents of your SSH public key file (see pre-requisites, above) into the box. Alternatively select ‘Password’ to use a password of your choice to administer the VM
- Restrict access by IP address: Leave this as ‘No’ to allow access from any internet host, or provide an IP address or a range of IP addresses to limit access
- Subscription: Select which of your Azure subscriptions you want to use
- Resource group: Choose to ‘Create new’ and provide a useful name of your choice
- Location: Select the geographical location physically closest to you

Click ‘OK’
STEP 2: Network Size and Performance
Define the number of Corda nodes in your network and the size of VM.
- Number of Network Map nodes: There can only be one Network Map node in this network. Leave as ‘1’
- Number of Notary nodes: There can only be one Notary node in this network. Leave as ‘1’
- Number of participant nodes: This is the number of Corda nodes in your network. At least 2 nodes in your network is recommended (so you can send transactions between them). You can specify 1 participant node and use the Notary node as a second node. There is an upper limit of 9
- Storage performance: Leave as ‘Standard’
- Virtual machine size: The size of the VM is automatically adjusted to suit the number of participant nodes selected. It is recommended to use the suggested values

Click ‘OK’
STEP 3: Corda Specific Options
Define the version of Corda you want on your nodes and the type of notary.
- Corda version (as seen in Maven Central): Select the version of Corda you want your nodes to use from the drop down list. The version numbers can be seen in Maven Central, for example 0.11.0
- Notary type: Select either ‘Non Validating’ (notary only checks whether a state has been previously used and marked as historic) or ‘Validating’ (notary performs transaction verification by seeing input and output states, attachments and other transaction information). More information on notaries can be found here

Click ‘OK’
STEP 4: Summary
A summary of your selections is shown.

Click ‘OK’ for your selection to be validated. If everything is ok you will see the message ‘Validation passed’
Click ‘OK’
STEP 5: Buy
Review the Azure Terms of Use and Privacy Policy and click ‘Purchase’ to buy the Azure VMs which will host your Corda nodes.
The deployment process will start and typically takes 8-10 minutes to complete.
Once deployed, click ‘Resource Groups’, select the resource group you defined in Step 1 above and click ‘Overview’ to see the virtual machine details. The names of your VMs will be prefixed with the resource prefix value you defined in Step 1 above.
The Network Map Service node is suffixed nm0. The Notary node is suffixed not0. Your Corda participant nodes are suffixed node0, node1, node2 etc. Note down the Public IP address for your Corda nodes. You will need these to connect to UI screens via your web browser:

Using the Yo! CorDapp¶
Loading the Yo! CorDapp on your Corda nodes lets you send simple Yo! messages to other Corda nodes on the network. A Yo! message is a very simple transaction. The Yo! CorDapp demonstrates:
- how transactions are sent only between the Corda nodes they are intended for, rather than being shared across the entire network, by using the network map
- how a pre-defined flow orchestrates the ledger update automatically
- how the contract imposes rules on the ledger updates
- Loading the Yo! CorDapp onto your nodes
The nodes you will use to send and receive Yo messages require the Yo! CorDapp jar file to be saved to their cordapps directory.
Connect to one of your Corda nodes (make sure this is not the Notary node) using an SSH client of your choice (e.g. PuTTY) and log into the virtual machine using the public IP address and your SSH key or the username / password combination you defined in Step 1 of the Azure build process.
Build the Yo! CorDapp sample, which you can find at https://github.com/corda/samples/tree/release-V4/yo-cordapp, and install it in the cordapps directory.
Now restart Corda and the Corda webserver using the following commands or restart your Corda VM from the Azure portal:
sudo systemctl restart corda
sudo systemctl restart corda-webserver
Repeat these steps on other Corda nodes on your network which you want to send or receive Yo messages.
- Verify the Yo! CorDapp is running
Open a browser tab and browse to the following URL:
http://(public IP address):(port)/web/yo
where (public IP address) is the public IP address of one of your Corda nodes on the Azure Corda network and (port) is the web server port number for your Corda node (10004 by default)
You will now see the Yo! CorDapp web interface:

- Sending a Yo message via the web interface
In the browser window type the following URL to send a Yo message to a target node on your Corda network:
http://(public IP address):(port)/api/yo/yo?target=(legalname of target node)
where (public IP address) is the public IP address of one of your Corda nodes on the Azure Corda network, (port) is the web server port number for your Corda node (10004 by default), and (legalname of target node) is the Legal Name of the target node as defined in the node.conf file. For example:
http://40.69.40.42:10004/api/yo/yo?target=Corda 0.10.1 Node 1 in tstyo2
An easy way to see the Legal Names of Corda nodes on the network is to use the peers screen:
http://(public IP address):(port)/api/yo/peers

- Viewing Yo messages
To see Yo! messages sent to a particular node open a browser window and browse to the following URL:
http://(public IP address):(port)/api/yo/yos

Viewing logs¶
Users may wish to view the raw logs generated by each node, which contain more information about the operations performed by each node.
You can access these using an SSH client of your choice (e.g. PuTTY) by logging into the virtual machine using the public IP address. Once logged in, navigate to the following directory for Corda logs (node-xxxxxx):
/opt/corda/logs
And navigate to the following directory for system logs (syslog):
/var/log
You can open log files with any text editor.


Next Steps¶
Now that you have built a Corda network and used a basic CorDapp, go and visit the dedicated Corda website.
Or, to join the growing Corda community and get straight into the Corda open source codebase, head over to the GitHub Corda repo.
AWS Marketplace¶
To help you design, build and test applications on Corda, called CorDapps, a Corda network AMI can be deployed from the AWS Marketplace. Instructions on running Corda nodes can be found here.
This Corda network offering builds a pre-configured network of Corda nodes as Ubuntu virtual machines (VMs). The network consists of a Notary node and three Corda nodes using version 1 of Corda. The following guide will also show you how to load one of four Corda sample apps, which demonstrate the basic principles of Corda. When you are ready to go further with developing on Corda and start making contributions to the project, head over to Corda.net.
Pre-requisites¶
- Ensure you have a registered AWS account which can create virtual machines under your subscription(s) and you are logged on to the AWS portal
- It is recommended you generate a private-public SSH key pair (see here)
Deploying a Corda Network¶
Browse to the AWS Marketplace and search for Corda.
Follow the instructions to deploy the AMI to an instance of EC2 which is in a region near to your location.
Build and Run a Sample CorDapp¶
Once the instance is running, SSH into it using your key pair:
cd ~/dev
There are 4 sample apps available by default:
ubuntu@ip-xxx-xxx-xxx-xxx:~/dev$ ls -la
total 24
drwxrwxr-x 6 ubuntu ubuntu 4096 Nov 13 21:48 .
drwxr-xr-x 8 ubuntu ubuntu 4096 Nov 21 16:34 ..
drwxrwxr-x 11 ubuntu ubuntu 4096 Oct 31 19:02 cordapp-example
drwxrwxr-x 9 ubuntu ubuntu 4096 Nov 13 21:48 obligation-cordapp
drwxrwxr-x 11 ubuntu ubuntu 4096 Nov 13 21:48 oracle-example
drwxrwxr-x 8 ubuntu ubuntu 4096 Nov 13 21:48 yo-cordapp
cd into the Corda sample you would like to run. For example:
cd cordapp-example/
Follow the instructions for the specific sample at https://www.corda.net/samples to build and run it. For example, with cordapp-example (the IOU app) you would run the following commands:
./gradlew deployNodes
./kotlin-source/build/nodes/runnodes
Then start the Corda webserver:
find ~/dev/cordapp-example/kotlin-source/ -name corda-webserver.jar -execdir sh -c 'java -jar {} &' \;
You can now interact with your running CorDapp. See the instructions here.
Next Steps¶
Now that you have built a Corda network and used a basic CorDapp, go and visit the dedicated Corda website.
Additional support is available on Stack Overflow and the Corda Slack channel.
You can build and run any other Corda samples or your own custom CorDapp here.
Or, to join the growing Corda community and get straight into the Corda open source codebase, head over to the GitHub Corda repo.
Load testing¶
This section explains how to apply random load to nodes in order to stress test them. It also allows the specification of disruptions that strain different resources, so that the nodes’ behaviour can be inspected under extreme conditions.
The load-testing framework is incomplete and is not part of CI currently, but the basic pieces are there.
Configuration of the load testing cluster¶
The load-testing framework currently assumes the following about the node cluster:
- The nodes are managed as a systemd service
- The node directories are the same across the cluster
- The messaging ports are the same across the cluster
- The executing identity of the load-test has SSH access to all machines
- There is a single network map service node
- There is a single notary node
- Some disruptions also assume other tools (like openssl) to be present
Note that these points could and should be relaxed as needed.
The load test Main expects a single command line argument that points to a configuration file specifying the cluster hosts and optional overrides for the default configuration:
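A minimal configuration file might look like this (mode and nodeHosts are the keys referenced in this document; see the tool’s reference configuration for the full set of overridable defaults):
mode = LOAD_TEST
nodeHosts = [ "node0.myhost.com", "node1.myhost.com" ]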
Running the load tests¶
In order to run the load tests you need to have an active ssh-agent running, with a single identity added that has SSH access to the load-test cluster.
You can use either IntelliJ or the gradle command line to start the tests.
To use gradle with configuration file: ./gradlew tools:loadtest:run -Ploadtest-config=PATH_TO_LOADTEST_CONF
To use gradle with system properties: ./gradlew tools:loadtest:run -Dloadtest.mode=LOAD_TEST -Dloadtest.nodeHosts.0=node0.myhost.com
注解
You can provide or override any configuration using the system properties; all properties need to be prefixed with “loadtest.”.
To use IntelliJ simply run Main.kt with the config path supplied as an argument or system properties as vm options.
Configuration of individual load tests¶
The load testing configurations are not set-in-stone and are meant to be played with to see how the nodes react.
There are a couple of top-level knobs to tweak test behaviour:
The one thing of note is disruptionPatterns, which may be used to specify ways of disrupting the normal running of the load tests.
Disruptions run concurrently in loops on randomly chosen nodes filtered by nodeFilter at somewhat random intervals.
As an example take strainCpu, which overutilises the processor:
We can use this by specifying a DisruptionSpec in the load test’s RunParameters:
This means that every 5-10 seconds at least one randomly chosen node’s cores will be spinning at 100% for 10 seconds.
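Since the referenced snippets live in the load-test tool’s source tree, here is a hypothetical Kotlin sketch of the shapes involved; the names follow the prose, but none of this is the tool’s verbatim API:
// Hypothetical sketch, not the tool's verbatim API.
class Disruption(val name: String, val run: (nodeHost: String) -> Unit)

data class DisruptionSpec(
        val nodeFilter: (String) -> Boolean,   // which hosts may be disrupted
        val disruption: Disruption,            // the disruption to apply
        val noDisruptionWindowMs: LongRange    // random quiet period between disruptions
)

// strainCpu might shell out over SSH and spin the cores for a fixed duration.
val strainCpu = Disruption("Strain CPU") { host ->
    println("would run on $host: for i in 1 2 3 4; do openssl speed & done; sleep 10")
}

val spec = DisruptionSpec(
        nodeFilter = { true },                 // any node may be chosen
        disruption = strainCpu,
        noDisruptionWindowMs = 5000L..10000L   // 5-10 seconds, matching the description above
)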
How to write a load test¶
A load test is basically defined by a random data structure generator that specifies a unit of work a node should perform, a function that performs this work, and a function that predicts what state the node should end up in by doing so:
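As a hypothetical sketch of that shape (the real definition lives in the tools/loadtest module; the parameter names follow the prose below):
// Hypothetical sketch, not the tool's verbatim API.
class LoadTest<T, S>(
        val testName: String,
        val generate: (S?) -> List<T>,     // produce random units of work
        val interpret: (S, T) -> S,        // predict the state change a unit of work should cause
        val execute: (T) -> Unit,          // perform the work against the cluster, e.g. over RPC
        val gatherRemoteState: (S?) -> S,  // read back the nodes' actual state, throwing on conflict
        val isConsistent: (S) -> Boolean = { true }  // poll for eventual consistency at the end
)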
LoadTest is parameterised over T, the unit of work, and S, the state type that aims to track remote node states. As an example let’s look at the Self Issue test. This test simply creates Cash Issues from nodes to themselves, and then checks the vault to see if the numbers add up:
The unit of work SelfIssueCommand simply holds an Issue and a handle to a node where the issue should be submitted. The generate method should provide a generator for these.
The state SelfIssueState then holds a map from node identities to a Long that describes the sum quantity of the generated issues (we fixed the currency to be USD).
The invariant we want to hold then simply is: The sum of submitted Issues should be the sum of the quantities in the vaults.
The interpret function should take a SelfIssueCommand and update SelfIssueState to reflect the change we’re expecting in the remote nodes. In our case this will simply be adding the issued amount to the corresponding node’s Long.
The execute function should perform the action on the cluster. In our case it will simply take the node handle and submit an RPC request for the Issue.
The gatherRemoteState function should check the actual remote nodes’ states and see whether they conflict with our local predictions (and should throw if they do). This function deserves its own paragraph.
gatherRemoteState gets as input handles to all the nodes, and the current predicted state, or null if this is the initial gathering.
The reason it gets the previous state boils down to allowing non-deterministic predictions about the nodes’ remote states. Say some piece of work triggers an asynchronous notification of a node. We need to account both for the case when the node hasn’t received the notification and for the case when it has. In these cases S should somehow represent a collection of possible states, and gatherRemoteState should “collapse” the collection based on the observations it makes. Of course we don’t need this for the simple case of the Self Issue test.
The last parameter isConsistent is used to poll for eventual consistency at the end of a load test. This is not needed for self-issuance.
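For the Self Issue case the bookkeeping is simple enough to sketch (hypothetical shapes; the names follow the prose):
// Hypothetical sketch: track the sum of self-issued USD per node identity.
data class SelfIssueState(val vaultsSelfIssued: Map<String, Long>) {
    // interpret: add an issued quantity to the issuing node's running total.
    fun addIssue(node: String, quantity: Long): SelfIssueState {
        val newTotal = (vaultsSelfIssued[node] ?: 0L) + quantity
        return copy(vaultsSelfIssued = vaultsSelfIssued + (node to newTotal))
    }
}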
Stability Test¶
The stability test is a variation of the load test: instead of flooding the nodes with requests, it uses an execution frequency limit to achieve a constant execution rate.
To run the stability test, set the load test mode to STABILITY_TEST (mode=STABILITY_TEST in the config file or -Dloadtest.mode=STABILITY_TEST in system properties).
The stability test will first self-issue cash using StabilityTest.selfIssueTest and after that it will randomly pay and exit cash using StabilityTest.crossCashTest for P2P testing. Unlike the load test, the stability test runs without any disruption.
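For example, combining this with the gradle invocation shown earlier (host name illustrative):
./gradlew tools:loadtest:run -Dloadtest.mode=STABILITY_TEST -Dloadtest.nodeHosts.0=node0.myhost.com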
Shell extensions for CLI Applications¶
Installing shell extensions¶
Users of bash or zsh can install an alias and auto-completion for Corda applications that contain a command line interface. Run:
java -jar <name-of-JAR>.jar install-shell-extensions
Then, either restart your shell, or for bash users run:
. ~/.bashrc
Or, for zsh users run:
. ~/.zshrc
You will now be able to run the command line application from anywhere by running the following:
<alias> --<option>
For example, for the Corda node, install the shell extensions using
java -jar corda-4.1-RC01.jar install-shell-extensions
And then run the node by running:
corda --<option>
Upgrading shell extensions¶
Once the shell extensions have been installed, you can upgrade them in one of two ways.
Overwrite the existing JAR with the newer version. The next time you run the application, it will automatically update the completion file. Either restart the shell or see above for instructions on making the changes take effect immediately.
If you wish to use a new JAR from a different directory, navigate to that directory and run:
java -jar <name-of-JAR>.jar
This will update the alias to point to the new location and update the command line completion functionality. Either restart the shell or see above for instructions on making the changes take effect immediately.
List of existing CLI applications¶
Description | Alias | JAR Name |
---|---|---|
Corda node | corda --<option> | corda-4.1-RC01.jar |
Network bootstrapper | bootstrapper --<option> | corda-tools-network-bootstrapper-4.1-RC01.jar |
Standalone shell | corda-shell --<option> | corda-tools-shell-cli-4.1-RC01.jar |
Blob inspector | blob-inspector --<option> | corda-tools-blob-inspector-4.1-RC01.jar |
Corda Network¶
Introduction to Corda Network¶
[Corda Network](https://corda.network/) is a publicly-available internet of Corda nodes operated by network participants. Each node is identified by a certificate issued by the network’s identity service, and will also be discoverable on a network map.
Corda Network enables interoperability – the exchange of data or assets via a secure, efficient internet layer – in a way that isn’t possible with separate, isolated Corda networks. A common trust root surrounds all transactions, and a consistent set of network parameters ensures all participants may transact with each other.
The network went live in December 2018, and is currently governed by R3. An independent, not-for-profit foundation has been set up to govern the network, and a transitional board will be selected from initial participants in Spring 2019, which will oversee the foundation until democratic elections are held a year later. See the [governance model](https://corda.network/governance/governance-guidelines.html) for more detail.
The network will support many sub-groups of participants running particular CorDapps (sometimes referred to as ‘business networks’), and these groups will often have a co-ordinating party (the ‘business network operator’) who manages the distribution of the app and rules, including membership, for its use. There is a clear separation between areas of control for the network as a whole and for individual business networks. Like the internet, Corda Network intends to exist as a background utility.
The main benefit of Corda Network for participants is being able to move cash, digital assets, and identity data from one application or line of business to another. Business network operators also benefit by being able to access network-wide services, and reuse the [trust root](https://corda.network/trust-root/index.html) and network services, instead of building and managing their own.
The Corda Network website has a [high level overview](https://corda.network/participation/index.html) of the joining process.
Key services¶
Identity Service¶
The Identity Service controls admissions of participants into Corda Network. The service receives certificate signing requests (CSRs) from prospective network participants (sometimes via a business network operator) and reviews the information submitted. A digitally signed participation certificate is returned if:
- The participant meets the requirements specified in the [bylaws and policies](https://corda.network/policy/admission-criteria.html) of the foundation (broadly speaking, limited to sanction screening only);
- The participant agrees to the Corda Network participant [terms of use](https://corda.network/participation/terms-of-use.html).
The Corda Network node can then use the participation certificate to register itself with the Network Map Service.
Network Map Service¶
The Network Map Service accepts digitally signed documents describing network routing and identifying information from nodes, based on the participation certificates signed by the Identity Service, and makes this information available to all Corda Network nodes.
Notary Service¶
The Corda design separates correctness consensus from uniqueness consensus, and the latter is provided by one or more Notary Services. The Notary will digitally sign a transaction presented to it, provided no transaction referring to any of the same inputs has been previously signed by the Notary, and the transaction timestamp is within bounds.
Business network operators and network participants may choose to enter into legal agreements which rely on the presence of such digital signatures when determining whether a transaction to which they are party, or upon the details of which they otherwise rely, is to be treated as ‘confirmed’ in accordance with the terms of the underlying agreement.
Support Service¶
The Support Service is provided to participants and business network operators to manage and resolve inquiries and incidents relating to the Identity Service, Network Map Service and Notary Services.
CRL configuration¶
The Corda Network provides an endpoint serving an empty certificate revocation list for TLS-level certificates. This is intended for deployments that do not provide a CRL infrastructure but still require strict CRL mode checking. In order to use this, add the following to your configuration file:
tlsCertCrlDistPoint = "https://crl.cordaconnect.org/cordatls.crl"
tlsCertCrlIssuer = "C=US, L=New York, O=R3 HoldCo LLC, OU=Corda, CN=Corda Root CA"
This set-up ensures that the TLS-level certificates are embedded with the CRL distribution point referencing the CRL issued by R3. In cases where a proprietary CRL infrastructure is provided those values need to be changed accordingly.
Corda Network: UAT Environment¶
For owners of tested CorDapps with a firm plan to take them into production, a bespoke UAT environment can be provided by R3. Here, such CorDapps can be further tested in the network configuration they will experience in production, utilising relevant Corda Network Services (including the Identity Operator and trusted notaries).
Corda UAT is not intended for customers’ full test cycles, as it is expected that the bulk of CorDapp testing will occur in simpler network configurations run by the CorDapp provider, but is available for testing of functionally complete and tested CorDapps in realistic network settings to simulate the real-world business environment, including the production settings of network parameters, Corda network services and supported Corda versions.
UAT is therefore more aligned to testing the operational characteristics of networked CorDapps than their specific functional features, although we recognise there can be overlap between the two. Realistic test data is therefore expected to be used, and may include data copied from production environments and hence representing real world entities and business activities. It will be up to the introducer of such data to ensure that all relevant data protection legislation is complied with and, in particular, that the terms and conditions under which Corda Network Services processes such data are suitable for their needs. All test data will be cleared down from Corda Network Services on the completion of testing.
More information about UAT will continue to be uploaded on this site or related sub-sites.
Joining the UAT environment¶
The joining steps below assume the potential participant is joining the UAT environment directly, and as such is not ‘sponsoring’ or onboarding other participants. If you do intend to sponsor end-participants onto UAT, please contact your Corda representative.
Pre-requisites:
Technical:
- One or more physical or virtual machines upon which to deploy Corda, with a compatible operating system and a compatible Java version (e.g. Oracle JDK 8u131+)
- Corda software, either open source Corda or Corda Enterprise (licensed from R3)
- A static external IP address for each machine on which Corda will be run
Business:
- Appropriate contractual terms have been agreed for access to the Services
- Access to the appropriate environment has been agreed with your project representative with sufficient advance notice (4 weeks standard, but may be longer if you have special service requirements) to ensure appropriate SLAs can be in place. Your project representative will be able to supply the booking template.
Note: Corda Network UAT is an R3 owned and operated environment and service designed to support parties intending to join Corda Network proper with realistic network test facilities. In contrast, Corda Network is a production network governed by an [independent Foundation](https://corda.network/governance/index.html), which has no responsibility for Corda Network UAT. Corda Network UAT seeks to provide a test environment which is as close as possible to Corda Network in its make-up and operation.
Steps to join UAT environment¶
Steps to join are outlined on the Corda Network UAT microsite: http://uat.network.r3.com/pages/joining/joining.html
For further questions on this process, please contact us - preferably on the mailing list: https://groups.io/g/corda-network
Contributing¶
Corda is an open-source project and contributions are welcome!
Contributing philosophy¶
Contents
Mission¶
Corda is an open source project with the aim of developing an enterprise-grade distributed ledger platform for business across a variety of industries. Corda was designed and developed to apply the concepts of blockchain and smart contract technologies to the requirements of modern business transactions. It is unique in its aim to build a platform for businesses to transact freely with any counter-party while retaining strict privacy. Corda provides an implementation of this vision in a code base which others are free to build on, contribute to or innovate around. The mission of Corda is further detailed in the Corda introductory white paper.
The project is supported and maintained by the R3 Alliance, or R3 for short, which consists of over two hundred firms working together to build and maintain this open source enterprise-grade blockchain platform.
Community Locations¶
The Corda maintainers, developers and extended community make active use of the following channels:
- The Corda Slack team for general community discussion, and in particular:
  - The #contributing channel for discussions around contributing
  - The #design channel for discussions around the platform’s design
- The corda-dev mailing list for discussion regarding Corda’s design and roadmap
- The GitHub issues board for reporting platform bugs and potential enhancements
- The Stack Overflow corda tag for specific technical questions
Project Leadership and Maintainers¶
The leader of this project is currently Mike Hearn, who is also the Lead Platform Engineer at R3. The project leader appoints the project’s Community Maintainers, who are responsible for merging community contributions into the code base and acting as points of contact.
In addition to the project leader and community maintainer(s), developers employed by R3 who have passed our technical interview process have commit privileges to the repo. All R3 contributions undergo peer review, which is documented in public in GitHub, before they can be merged; they are held to the same standard as all other contributions. The community is encouraged both to observe and participate in this review process.
Community maintainers¶
Current community maintainers:
- Joel Dudley - Contact me:
  - On the Corda Slack team, either in the #community channel or by direct message using the handle @joel
  - By email: joel.dudley at r3.com
We anticipate additional maintainers joining the project in the future from across the community.
Existing Contributors¶
Over two hundred individuals have contributed to the development of Corda. You can find a full list of contributors in the CONTRIBUTORS.md list.
Transparency and Conflict Policy¶
The project is supported and maintained by the R3 Alliance, which consists of over two hundred firms working together to build and maintain this open source enterprise-grade blockchain platform. We develop in the open and publish our Jira to give everyone visibility. R3 also maintains and distributes a commercial distribution of Corda. Our vision is that distributions of Corda be compatible and interoperable, and our contribution and code review guidelines are designed in part to enable this.
As the R3 Alliance is maintainer of the project and also develops a commercial distribution of Corda, what happens if a member of the community contributes a feature which the R3 team have implemented only in their commercial product? How is this apparent conflict managed? Our approach is simple: if the contribution meets the standards for the project (see above), then the existence of a competing commercial implementation will not be used as a reason to reject it. In other words, it is our policy that should a community feature be contributed which meets the criteria above, we will accept it or work with the contributor to merge/reconcile it with the commercial feature.
How to contribute¶
Contents
Identifying an area to contribute¶
There are several ways to identify an area where you can contribute to Corda:
- The easiest is just to message one of the Community Maintainers saying “I want to help!”. They’ll work with you to find an area for you to contribute
- If you have a specific contribution in mind, confirm whether the contribution is appropriate first by reaching out in the #contributing channel of the Corda Slack or contacting one of the Community Maintainers directly
- If you do not have a specific contribution in mind, you can also browse the issues labelled as help wanted on the Corda GitHub issues page
  - Issues that additionally have the good first issue label are considered ideal for first-timers
Contribution guidelines¶
We believe one of the things that makes Corda special is its coherent design and we seek to retain this defining characteristic. From the outset we defined some guidelines to ensure new contributions only ever enhance the project:
- Quality: Code in the Corda project should meet the Corda coding style guidelines, with sufficient test-cases, descriptive commit messages, evidence that the contribution does not break any compatibility commitments or cause adverse feature interactions, and evidence of high-quality peer-review
- Size: The Corda project’s culture is one of small pull-requests, regularly submitted. The larger a pull-request, the more likely it is that you will be asked to resubmit as a series of self-contained and individually reviewable smaller PRs
- Scope: We try to ensure the Corda project remains coherent and focused so we ask that the feature’s scope is within the definition specified in the Corda Technical Whitepaper
- Maintainability: If the feature will require ongoing maintenance (e.g. support for a particular brand of database), we may ask you to accept responsibility for maintaining this feature
- Non-duplicative: If the contribution duplicates features that already exist or are already in progress, you may be asked to work with the project maintainers to reconcile this. As the major contributor to Corda, many employees of R3 will be working on features at any given time. To avoid surprises and foster transparency, our Jira work tracking system is public. If in doubt, reach out to one of the Community Maintainers
Making the required changes¶
You should make your changes as follows:
- Create a fork of the master branch of the Corda repo
- Clone the fork to your local machine
- Build Corda by following the instructions here
- Make the changes, in accordance with the code style guide
Things to check¶
- Make sure your error handling is up to scratch: Errors should not leak to the UI. When writing tools intended for end users, like the node or command line tools, remember to add try/catch blocks. Throw meaningful errors. For example, instead of throwing an OutOfMemoryError, use the error message to indicate that a file is missing, a network socket was unreachable, etc. Tools should not dump stack traces to the end user (a minimal sketch of this appears after this list)
- Look for API breaks: We have an automated checker tool that runs as part of our continuous integration pipeline and helps a lot, but it can’t catch semantic changes where the behavior of an API changes in ways that might violate app developer expectations
- Suppress inevitable compiler warnings: Compiler warnings should have a @Suppress annotation on them if they’re expected and can’t be avoided
- Remove deprecated functionality: When deprecating functionality, make sure you remove the deprecated uses in the codebase
- Avoid making formatting changes as you work: In Kotlin 1.2.20, new style guide rules were implemented. The new Kotlin style guide is significantly more detailed than before and IntelliJ knows how to implement those rules. Re-formatting the codebase creates a lot of diffs that make merging more complicated
- Things to consider when writing CLI apps: Make sure any changes to CLI applications conform to the CLI UX Guide
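As a rough illustration of the error-handling point above, here is a minimal Kotlin sketch; the tool, file name and messages are invented for illustration and are not part of any real Corda utility:
import java.nio.file.Files
import java.nio.file.NoSuchFileException
import java.nio.file.Paths
import kotlin.system.exitProcess

// Hypothetical CLI entry point: expected failures are caught and reported as a
// meaningful message and exit code, never as a raw stack trace.
fun main(args: Array<String>) {
    val configPath = Paths.get(args.firstOrNull() ?: "node.conf")
    try {
        val config = String(Files.readAllBytes(configPath), Charsets.UTF_8)
        println("Loaded ${config.length} characters of configuration")
    } catch (e: NoSuchFileException) {
        System.err.println("Configuration file not found: $configPath")
        exitProcess(1)
    }
}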
Extending the flow state machine¶
If you are interested in extending the flow state machine, you can find instructions on how to do this here.
Testing the changes¶
You should test your changes as follows:
- Add tests: Unit tests and integration tests for external API changes must cover Java and Kotlin. For internal API changes these tests can be scaled back to Kotlin only
- Run the tests: Your changes must pass the tests described here
- Perform manual testing: Before sending that code for review, spend time poking and prodding the tool and thinking, “Would the experience of using this feature make my mum proud of me?”. Automated tests are not a substitute for dogfooding
- Build against the master branch: You can test your changes against CorDapps defined in other repos by following the instructions here
- Run the API scanner: Your changes must also not break compatibility with existing public API. We have an API scanning tool which runs as part of the build process which can be used to flag up any accidental changes, which is detailed here
Updating the docs¶
You should document any changes to Corda’s public API as follows:
- Add comments and javadocs/kdocs. API functions must have javadoc/kdoc comments and sentences must be terminated with a full stop. We also start comments with capital letters, even for inline comments. Where Java APIs have synonyms (e.g. %d and %date), we prefer the longer form for legibility reasons. You can configure your IDE to highlight these in bright yellow
- Update the relevant .rst file(s)
- Include the change in the changelog if the change is external and therefore visible to CorDapp developers and/or node operators
- Build the docs locally and check that the resulting .html files (under docs/build/html) for the modified pages render correctly
- If relevant, add a sample. Samples are one of the key ways in which users learn about what the platform can do. If you add a new API or feature and don’t update the samples, your work will be much less impactful
Merging the changes back into Corda¶
You should merge the changes back into Corda as follows:
- Create a pull request from your fork to the master branch of the Corda repo
- In the PR comments box:
  - Complete the pull-request checklist:
    - [ ] Have you run the unit, integration and smoke tests as described here? https://docs.corda.net/head/testing.html
    - [ ] If you added/changed public APIs, did you write/update the JavaDocs?
    - [ ] If the changes are of interest to application developers, have you added them to the changelog, and potentially release notes?
    - [ ] If you are contributing for the first time, please read the agreement in CONTRIBUTING.md now and add to this Pull Request that you agree to it.
  - Add a clear description of the purpose of the PR
  - Add the following statement to confirm that your contribution is your own original work: “I hereby certify that my contribution is in accordance with the Developer Certificate of Origin (https://developercertificate.org/).”
- Request a review by reaching out in the #contributing channel of the Corda Slack or contacting one of the Community Maintainers directly
- The reviewer will either:
  - Accept and merge your PR
  - Leave comments requesting changes via the GitHub PR interface
    - You should make the changes by pushing directly to your existing PR branch. The PR will be updated automatically
- (Optional) Open an additional PR to add yourself to the contributors list
  - The format is generally firstname surname (company), but the company can be omitted if desired
Developer Certificate of Origin¶
All contributions to this project are subject to the terms of the Developer Certificate of Origin, available here and reproduced below:
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
Building Corda¶
These instructions are for downloading and building the Corda code locally. If you only wish to develop CorDapps for use on Corda, you don’t need to do this; follow the instructions at “Setting up a CorDapp development environment” and use the precompiled binaries.
Windows¶
Java¶
- Visit http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
- Scroll down to “Java SE Development Kit 8uXXX” (where “XXX” is the latest minor version number)
- Toggle “Accept License Agreement”
- Click the download link for jdk-8uXXX-windows-x64.exe (where “XXX” is the latest minor version number)
- Download and run the executable to install Java (use the default settings)
- Add Java to the PATH environment variable by following the instructions at https://docs.oracle.com/javase/7/docs/webnotes/install/windows/jdk-installation-windows.html#path
- Open a new command prompt and run java -version to test that Java is installed correctly
Git¶
- Visit https://git-scm.com/download/win
- Click the “64-bit Git for Windows Setup” download link.
- Download and run the executable to install Git (use the default installation values) and make a note of the installation directory.
- Open a new command prompt and type git --version to test that Git is installed correctly
Building Corda¶
- Open a command prompt
- Run git clone https://github.com/corda/corda.git
- Run gradlew build
Debian/Ubuntu Linux¶
These instructions were tested on Ubuntu Server 18.04 LTS. This distribution includes git and python, so only the following steps are required:
Java¶
- Run sudo add-apt-repository ppa:webupd8team/java from the terminal. Press ENTER when prompted.
- Run sudo apt-get update
- Then run sudo apt-get install oracle-java8-installer. Press Y when prompted and agree to the licence terms.
- Run java -version to verify that Java is installed correctly
Building Corda¶
- Open the terminal
- Run git clone https://github.com/corda/corda.git
- Run ./gradlew build
Testing your changes¶
Automated tests¶
Corda has a suite of tests that any contributing developers must maintain and extend when adding new code.
There are several test suites:
- Unit tests: These are traditional unit tests that should only test a single code unit, typically a method or class.
- Integration tests: These tests should test the integration of small numbers of units, preferably with mocked out services.
- Smoke tests: These are full end to end tests which start a full set of Corda nodes and verify broader behaviour.
- Other: These include tests such as performance tests, stress tests, etc, and may be in an external repo.
Running the automated tests¶
These tests are mostly written with JUnit and can be run via gradle:
- Windows: Run gradlew test integrationTest smokeTest
- Unix/Mac OSX: Run ./gradlew test integrationTest smokeTest
Before creating a pull request please make sure these pass.
Manual testing¶
You should manually test anything that would be impacted by your changes. The areas that usually need to be manually tested, and when, are as follows:
- Node startup - changes in the node or node:capsule projects (whether in the Kotlin or the Gradle code), or in the cordformation Gradle plugin
- Sample project - changes in the samples project, e.g. changing the IRS demo means you should manually test the IRS demo
- Explorer - changes to the tools/explorer project
- Demobench - changes to the tools/demobench project
How to manually test each of these areas differs and is currently not fully specified. For now the best thing to do is to ensure the program starts, that you can interact with it, and that no exceptions are generated in normal operation.
Checking API stability¶
We have committed not to alter Corda’s API so that developers will not have to keep rewriting their CorDapps with each new Corda release. The stable Corda modules are listed here. Our CI process runs an “API Stability” check for each GitHub pull request in order to check that we don’t accidentally introduce an API-breaking change.
Build Process¶
As part of the build process the following commands are run for each PR:
$ gradlew generateApi
$ .ci/check-api-changes.sh
This bash script has been tested on both macOS and various Linux distributions, and it can also be run on Windows with the use of a suitable bash emulator such as Git Bash. The script’s return value is the number of API-breaking changes that it has detected, and this should be zero for the check to pass. The maximum return value is 255 (the shell limit on exit codes), although the script’s printed output will still correctly report higher numbers of breaking changes.
There are three kinds of breaking change:
- Removal or modification of existing API, i.e. an existing class, method or field has been either deleted or renamed, or its signature somehow altered.
- Addition of a new method to an interface or abstract class. Types that have been annotated as @DoNotImplement are excluded from this check. (This annotation is also inherited across subclasses and sub-interfaces; see the sketch after this list.)
- Exposure of an internal type via a public API. Internal types are considered to be anything in a *.internal. package or anything in a module that isn’t in the stable modules list here.
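To make the second category concrete, here is a minimal Kotlin sketch; the two interfaces are invented for illustration, and only the @DoNotImplement annotation is a real Corda type:
import net.corda.core.DoNotImplement

// Adding a method to this interface would be an API-breaking change, because
// CorDapps are free to implement it and would no longer compile.
interface PaymentScheme {
    fun pay(amountCents: Long)
}

// Types annotated @DoNotImplement may be called but not implemented by users,
// so new methods can be added to them without failing the API Stability check.
@DoNotImplement
interface NodeDiagnostics {
    fun platformVersion(): Int
}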
Developers can execute these commands themselves before submitting their PR, to ensure that they haven’t inadvertently broken Corda’s API.
How it works¶
The generateApi Gradle task writes a summary of Corda’s public API into the file build/api/api-corda-4.1-RC01.txt. The .ci/check-api-changes.sh script then compares this file with the contents of .ci/api-current.txt, which is a managed file within the Corda repository.
The Gradle task itself is implemented by the API Scanner plugin. More information on the API Scanner plugin is available here.
Updating the API¶
As a rule, api-current.txt should only be updated by the release manager for each Corda release. We do not expect modifications to api-current.txt as part of normal development. However, we may sometimes need to adjust the public API in ways that would not break developers’ CorDapps but which would be blocked by the API Stability check. For example, migrating a method from an interface into a superinterface. Any changes to the API summary file should be included in the PR, which would then need explicit approval from either Mike Hearn, Rick Parker or Matthew Nesbit.
Note
If you need to modify api-current.txt, do not re-generate the file on the master branch. This would include new API that hasn’t been released or committed to, and may be subject to change. Instead, manually change the specific line or lines of the existing committed API that have changed.
Building the documentation¶
The documentation is under the docs folder and is written in reStructuredText format. The HTML-format documentation, along with the code documentation, is pre-generated; both can be built automatically via a provided script.
Building Using the Docker Image¶
This is the method used during the build. If you run:
./gradlew makeDocs
this will download a docker image from Docker Hub and run the build locally inside it, mounting much of the docs directory at various places inside the image. This image is pre-built with the dependencies that were in requirements.txt at the time of the docker build.
Changing requirements¶
If you want to upgrade, say, the version of sphinx that we’re using, you must:
- Upgrade the version number in requirements.txt
- Build a new docker image: cd docs && docker build -t corda/docs-builder:latest -f docs_builder/Dockerfile .
- After doing this, the build will run locally using your image
- You can also push this image to the docker registry if you have the corda keys
- You can run docker run -it corda/docs-builder /bin/bash to look around interactively inside the built docker image (e.g. to see what is in the requirements.txt file)
Building from the Command Line (non-docker)¶
Requirements¶
In order to build the documentation you will need a development environment set up as described under Building Corda.
You will also need additional dependencies based on your operating system, which are detailed below.
Windows¶
Git, bash and make¶
In order to build the documentation for Corda you need a bash emulator with make installed and accessible from the command prompt. Git for Windows ships with a version of MinGW that contains a bash emulator, to which you can download and add a Windows port of make, instructions for which are provided below. Alternatively you can install a full version of MinGW from here.
- Go to ezwinports and click the download for make-4.2.1-without-guile-w32-bin.zip
- Navigate to the Git installation directory (by default C:\Program Files\Git) and open mingw64
- Unzip the downloaded file into this directory, but do NOT overwrite/replace any existing files
- Add the Git bin directory to your system PATH environment variable (by default C:\Program Files\Git\bin)
- Open a new command prompt and run bash to test that you can access the Git bash emulator
- Type make to make sure it has been installed successfully (you should get an error like make: *** No targets specified and no makefile found. Stop.)
Python, Pip and VirtualEnv¶
- Visit https://www.python.org/downloads
- Scroll down to the most recent v2 release (tested with v2.7.15) and click the download link
- Download the “Windows x86-64 MSI installer”
- Run the installation, making a note of the Python installation directory (defaults to c:\Python27)
- Add the Python installation directory (e.g. c:\Python27) to your system PATH environment variable
- Add the Python scripts sub-directory (e.g. c:\Python27\scripts) to your system PATH environment variable
- Open a new command prompt and check you can run Python by running python --version
- Check you can run pip by running pip --version
- Install virtualenv by running pip install virtualenv from the command line
- Check you can run virtualenv by running virtualenv --version from the command line
LaTeX¶
Corda requires LaTeX to be available for building the documentation. The instructions below are for installing TeX Live but other distributions are available.
- Visit https://tug.org/texlive/
- Click download
- Download and run install-tl-windows.exe
- Keep the default options (simple installation is fine)
- Open a new command prompt and check you can run pdflatex by running pdflatex --version
Debian/Ubuntu Linux¶
These instructions were tested on Ubuntu Server 18.04 LTS. This distribution includes git and python, so only the following steps are required:
Pip/VirtualEnv¶
- Run sudo apt-get install python-pip
- Run pip install virtualenv
- Run pip --version to verify that pip is installed correctly
- Run virtualenv --version to verify that virtualenv is installed correctly
LaTeX¶
Corda requires LaTeX to be available for building the documentation. The instructions below are for installing TeX Live but other distributions are available.
- Run sudo apt-get install texlive-full
Build¶
Once the requirements are installed, you can automatically build the HTML format user documentation, PDF, and the API documentation by running the following script:
// On Windows
gradlew buildDocs
// On Mac and Linux
./gradlew buildDocs
Alternatively you can build non-HTML formats from the docs folder. However, running make from the command line requires further dependencies to be installed. When building in Gradle they are installed in a python virtualenv, so for command-line builds they will need to be installed explicitly by running:
pip install -r requirements.txt
Change directory to the docs folder and then run the following to see a list of all available formats:
make
For example, to produce the documentation in HTML format, run:
make html
Code style guide¶
This document explains the coding style used in the Corda repository. You will be expected to follow these recommendations when submitting patches for review. Please take the time to read them and internalise them, to save time during code review.
What follows are recommendations and not rules. They are in places intentionally vague, so use your good judgement when interpreting them.
1. General style¶
We use the standard Kotlin coding style from JetBrains.
In Kotlin code, KDoc is used rather than JavaDoc. It’s very similar except it uses Markdown for formatting instead of HTML tags.
We target Java 8 and use the latest Java APIs whenever convenient. We use java.time.Instant to represent timestamps and java.nio.file.Path to represent file paths.
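As a trivial sketch of those preferred types (the class and values here are invented for illustration):
import java.nio.file.Path
import java.nio.file.Paths
import java.time.Instant

// Timestamps are java.time.Instant; file locations are java.nio.file.Path.
data class AuditRecord(val recordedAt: Instant, val source: Path)

fun newRecord(): AuditRecord = AuditRecord(Instant.now(), Paths.get("logs", "node.log"))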
Never apply any design pattern religiously. There are no silver bullets in programming and if something is fashionable, that doesn’t mean it’s always better. In particular:
- Use functional programming patterns like map, filter, fold only where it’s genuinely more convenient. Never be afraid to use a simple imperative construct like a for loop or a mutable counter if that results in more direct, English-like code.
- Use immutability when you don’t anticipate very rapid or complex changes to the content. Immutability can help avoid bugs, but over-used it can make code that has to adjust fields of an immutable object (in a clone) hard to read and stress the garbage collector. When such code becomes a widespread pattern it can lead to code that is just generically slow but without hotspots.
- The trade-offs between various thread safety techniques are complex, subtle, and no technique is always superior to the others. Our code uses a mix of locks, worker threads and messaging depending on the situation.
1.1 Line Length and Spacing¶
We aim for line widths of no more than 120 characters. That is wide enough to avoid lots of pointless wrapping but narrow enough that with a widescreen monitor and a 12 point fixed width font (like Menlo) you can fit two files next to each other. This is not a rigidly enforced rule and if wrapping a line would be excessively awkward, let it overflow. Overflow of a few characters here and there isn’t a big deal: the goal is general convenience.
Where the number of parameters in a function, class, etc. causes an overflow past the end of the first line, they should be structured one parameter per line.
Code is vertically dense, blank lines in methods are used sparingly. This is so more code can fit on screen at once.
We use spaces and not tabs, with indents being 4 spaces wide.
1.2 Naming¶
Naming generally follows Java standard style (pascal case for class names, camel case for methods, properties and variables). Where a class name describes a tuple, “And” should be included in order to clearly indicate the elements are individual parts, for example PartyAndReference, not PartyReference (which sounds like a reference to a Party).
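A minimal sketch of that convention (with simplified field types, not the real Corda class):
// "And" in the name signals a tuple of two independent parts, not a reference to a Party.
data class PartyAndReference(val party: String, val reference: String)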
2. Comments¶
We like them as long as they add detail that is missing from the code. Comments that simply repeat the story already told by the code are best deleted. Comments should:
- Explain what the code is doing at a higher level than is obtainable from just examining the statement and surrounding code.
- Explain why certain choices were made and the trade-offs considered.
- Explain how things can go wrong, which is a detail often not easily seen just by reading the code.
- Use good grammar with capital letters and full stops. This gets us in the right frame of mind for writing real explanations of things.
When writing code, imagine that you have an intelligent colleague looking over your shoulder asking you questions as you go. Think about what they might ask, and then put your answers in the code.
Don’t be afraid of redundancy, many people will start reading your code in the middle with little or no idea of what it’s about (e.g. due to a bug or a need to introduce a new feature). It’s OK to repeat basic facts or descriptions in different places if that increases the chance developers will see something important.
API docs: all public methods, constants and classes must have doc comments in either JavaDoc or KDoc. API docs should:
- Explain what the method does in words different to how the code describes it.
- Always have some text, annotation-only JavaDocs don’t render well. Write “Returns a blah blah blah” rather than “@returns blah blah blah” if that’s the only content (or leave it out if you have nothing more to say than the code already says).
- Illustrate with examples when you might want to use the method or class. Point the user at alternatives if this code is not always right.
- Make good use of {@link} annotations.
Bad JavaDocs look like this:
/** @return the size of the Bloom filter. */
public int getBloomFilterSize() {
return size;
}
Good JavaDocs look like this:
/**
* Returns the size of the current {@link BloomFilter} in bytes. Larger filters have
* lower false positive rates for the same number of inserted keys and thus lower privacy,
* but bandwidth usage is also correspondingly reduced.
*/
public int getBloomFilterSize() { ... }
We use C-style (/** */) comments for API docs, and C++-style comments (//) for explanations that are only intended to be viewed by people who read the code.
When writing multi-line TODO comments, indent the body text past the TODO line, for example:
// TODO: Something something
// More stuff to do
// Etc. etc.
3. Threading¶
Classes that are thread safe should be annotated with the @ThreadSafe annotation. The class or method comments should describe how threads are expected to interact with your code, unless it’s obvious because the class is (for example) a simple immutable data holder.
Code that supports callbacks or event listeners should always accept an Executor argument that defaults to MoreExecutors.directThreadExecutor() (i.e. the calling thread) when registering the callback. This makes it easy to integrate the callbacks with whatever threading environment the calling code expects, e.g. serialised onto a single worker thread if necessary, or run directly on the background threads used by the class if the callback is thread safe and doesn’t care in what context it’s invoked.
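A minimal sketch of that registration pattern, assuming Guava is on the classpath; the class and listener names are invented, and note that in current Guava releases the calling-thread executor is obtained via MoreExecutors.directExecutor():
import com.google.common.util.concurrent.MoreExecutors
import java.util.concurrent.CopyOnWriteArrayList
import java.util.concurrent.Executor

class BlockFetcher {
    private val listeners = CopyOnWriteArrayList<Pair<Executor, () -> Unit>>()

    // The executor defaults to the calling thread; callers that need the callback
    // serialised onto a worker thread can pass their own executor instead.
    fun addFetchListener(listener: () -> Unit, executor: Executor = MoreExecutors.directExecutor()) {
        listeners += executor to listener
    }

    private fun notifyFetched() {
        for ((executor, listener) in listeners) executor.execute { listener() }
    }
}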
In the prototyping code it’s OK to use synchronised methods i.e. with an exposed lock when the use of locking is quite trivial. If the synchronisation in your code is getting more complex, consider the following:
- Is the complexity necessary? At this early stage, don’t worry too much about performance or scalability, as we’re exploring the design space rather than making an optimal implementation of a design that’s already nailed down.
- Could you simplify it by making the data be owned by a dedicated, encapsulated worker thread? If so, remember to think about flow control and what happens if a work queue fills up: the actor model can often be useful but be aware of the downsides and try to avoid explicitly defining messages, prefer to send closures onto the worker thread instead.
- If you use an explicit lock and the locking gets complex, and always if the class supports callbacks, use the cycle detecting locks from the Guava library.
- Can you simplify some things by using thread-safe collections like CopyOnWriteArrayList or ConcurrentHashMap? These data structures are more expensive than their non-thread-safe equivalents but can be worth it if it lets us simplify the code.
Immutable data structures can be very useful for making it easier to reason about multi-threaded code. Kotlin makes it easy to define these via the “data” attribute, which auto-generates a copy() method. That lets you create clones of an immutable object with arbitrary fields adjusted in the clone. But if you can’t use the data attribute for some reason, for instance, you are working in Java or because you need an inheritance hierarchy, then consider that making a class fully immutable may result in very awkward code if there’s ever a need to make complex changes to it. If in doubt, ask. Remember, never apply any design pattern religiously.
We have an extension to the Executor interface called AffinityExecutor. It is useful when the thread safety of a piece of code is based on expecting to be called from a single thread only (or potentially, a single thread pool). AffinityExecutor has additional methods that allow for thread assertions. These can be useful to ensure code is not accidentally being used in a multi-threaded way when it didn’t expect that.
4. Assertions and errors¶
We use them liberally and we use them at runtime, in production. That means we avoid the “assert” keyword in Java, and instead prefer to use the check() or require() functions in Kotlin (for an IllegalStateException or IllegalArgumentException respectively), or the Guava Preconditions methods (checkState, checkArgument) from Java. Assertions should always have messages associated with them describing what went wrong, even if it’s just a copy of the expression (but ideally is more helpful).
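A minimal sketch of the Kotlin side of this; the function and message text are invented for illustration:
fun recordPayment(amountCents: Long, serviceStarted: Boolean) {
    // require() throws IllegalArgumentException with the supplied message.
    require(amountCents > 0) { "amountCents must be positive, but was $amountCents" }
    // check() throws IllegalStateException with the supplied message.
    check(serviceStarted) { "recordPayment called before the service was started" }
}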
We define new exception types liberally. We prefer not to provide English language error messages in exceptions at the throw site; instead we define new types with any useful information as fields, with a toString() method if really necessary. In other words, don’t do this:
throw new Exception("The foo broke");
instead, do this:
class FooBrokenException extends Exception {}
throw new FooBrokenException();
The latter is easier to catch and handle if later necessary, and the type name should explain what went wrong.
Note that Kotlin does not require exception types to be declared in method prototypes like Java does.
5. Properties¶
Where we want a public property to have one super-type in public and another sub-type in private (or internal), perhaps to expose additional methods with a greater level of access to the code within the enclosing class, the style should be:
interface PublicFoo
private class PrivateFoo : PublicFoo

private val _foo = PrivateFoo()
val foo: PublicFoo get() = _foo
Notably:
- The public property should have an explicit and more restrictive type, most likely a super class or interface.
- The private, backed property should begin with underscore but otherwise have the same name as the public property. The underscore resolves a potential property name clash, and avoids naming such as “privateFoo”. If the type or use of the private property is different enough that there is no naming collision, prefer the distinct names without an underscore.
- The underscore prefix is not a general pattern for private properties.
- The public property should not have an additional backing field but use “get()” to return an appropriate copy of the private field.
- The public property should optionally wrap the returned value in an immutable wrapper, such as Guava’s immutable collection wrappers, if that is appropriate.
- If the code following “get()” is succinct, prefer a one-liner formatting of the public property as above, otherwise put the “get()” on the line below, indented.
6. Compiler warnings¶
We do not allow compiler warnings, except in the experimental module where the usual standards do not apply and warnings are suppressed. If a warning exists it should be either fixed or suppressed using @SuppressWarnings and if suppressed there must be an accompanying explanation in the code for why the warning is a false positive.
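As a hedged Kotlin illustration of this rule (the functions are invented; the accompanying comment carries the required explanation for the suppression):
@Deprecated("Use newWireFormat() instead")
fun oldWireFormat(): Int = 1

// The v3 wire format must still be produced for older peers, so this call to the
// deprecated function is expected; the deprecation warning is a false positive here.
@Suppress("DEPRECATION")
fun bridgeToOldPeers(): Int = oldWireFormat()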
7. When to update the docsite¶
The documentation website (this site) must be updated in any PR that adds or changes something visible to app developers, or people who operate a node. For the avoidance of doubt this includes the following kinds of changes:
- Adding new APIs, shell commands, config file options, command line flags.
- Altering database schemas. You’ll need to write a Liquibase migration script and update the docsite to explain the migration.
- Deprecating existing APIs or design patterns.
- Adding support for new supported backends and modules.
- Changing the Gradle build DSL.
You should additionally update the changelog if a change is risky or may in some way be of interest to users, even if not directly visible.
Because this is a developer platform, many changes are user visible. That means many PRs will require docsite changes. When you review a PR that doesn’t change the docsite, you should be asking yourself “why does this PR not require docs changes” rather than the other way around (“does this PR require changes”), which is easier to forget about.
CLI UX Guide¶
Command line options¶
Command line utilities should use picocli (http://picocli.info) to provide a unified interface and follow the conventions in the picocli documentation, some of the more important of which are repeated below.
Option names¶
- Options should be specified on the command line using a double dash, e.g. --parameter.
- Options that consist of multiple words should be separated via hyphens, e.g. --my-multiple-word-parameter-name.
Short names¶
- Where possible a POSIX style short option should be provided for ease of use (see http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap12.html#tag_12_02).
- These should be prefixed with a single hyphen.
- For example, -v for --verbose, -d for --dev-mode.
- Consider adding short options for commands that would be run regularly as part of troubleshooting/operational processes.
- Short options should not be used for commands that would be used just once, for example initialising/registration type tasks.
- The picocli interface allows combinations of options without parameters; for example, -v and -d can be combined as -vd.
Positional parameters¶
- Parameters specified without an option should ideally all be part of a list.
  - For example, in java -jar test.jar file1 file2 file3, the parameters file1, file2 and file3 should be a list of files that are all acted on together (e.g. a list of CorDapps).
- Avoid using positional parameters to mean different things, which involves someone remembering in which order things need to be specified.
  - For example, avoid java -jar test.jar configfile1 cordapp1 cordapp2 where parameter 1 is the config file and any subsequent parameters are the CorDapps. Use java -jar test.jar cordapp1 cordapp2 --config-file configfile1 instead (see the sketch after this list).
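A minimal picocli sketch of that layout, assuming picocli 4.x; the command and field names are invented:
import picocli.CommandLine
import picocli.CommandLine.Command
import picocli.CommandLine.Option
import picocli.CommandLine.Parameters
import java.io.File

@Command(name = "load-cordapps", description = ["Loads the given CorDapp JARs"])
class LoadCorDapps : Runnable {
    // All positional parameters form one homogeneous list.
    @Parameters(paramLabel = "CORDAPP", description = ["CorDapp JARs to load"])
    var cordapps: List<File> = emptyList()

    // Anything that means something different is a named option instead.
    @Option(names = ["--config-file"], description = ["Path to the configuration file"])
    var configFile: File? = null

    override fun run() = println("Loading ${cordapps.size} CorDapp(s) with config $configFile")
}

fun main(args: Array<String>) {
    CommandLine(LoadCorDapps()).execute(*args)
}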
Standard options¶
- A --help option should be provided which details all possible options with a brief description and any short name equivalents. A -h short option should also be provided.
- A --version option should be provided which outputs the version number of the software. A -V short option should also be provided.
- A --logging-level option should be provided which specifies the logging level to be used in any logging files. Acceptable values should be DEBUG, TRACE, INFO, WARN and ERROR.
- --verbose and --log-to-console options should be provided (both equivalent) which specify that logging output should be displayed in the console. A -v short option should also be provided.
Standard subcommands¶
- An install-shell-extensions subcommand should be provided that creates and installs a bash completion file.
Defaults¶
- Flags should have sensible defaults.
- Boolean flags should always default to false. Specifying the flag without a parameter should set it to true. For example, --use-something should be equal to --use-something=true, and no option should be equal to --use-something=false.
- Do a bit of work to figure out reasonable defaults. Nobody likes having to set a dozen flags before the tool will cooperate.
Adding a new option¶
- Boolean options should start with is, has or with. For example, --is-cheesy, --with-cheese, --has-cheese-on.
- Any new options must be documented in the docsite and via the --help screen.
- Never use acronyms in option names, and try to make them as descriptive as possible.
Parameter stability¶
- Avoid removing parameters. If, for some reason, a parameter needs to be renamed, add a new parameter with the new name and deprecate the old parameter, or alternatively keep both versions of the parameter. See Backwards Compatibility for more information.
Notes for adding a new command line application¶
The CordaCliWrapper base class¶
The CordaCliWrapper base class from the cliutils module should be used as a base where practicable; it provides a set of default options out of the box. In order to use it, create a class containing your command line options using the syntax shown below (see the picocli website for more information):
import net.corda.cliutils.ExitCodes
import net.corda.cliutils.CordaCliWrapper
class UsefulUtilityExitCodes: ExitCodes {
companion object {
val APPLICATION_SPECIFIC_ERROR_CODE: Int = 100
}
}
class UsefulUtility : CordaCliWrapper(
"useful-utility", // the alias to be used for this utility in bash. When install-shell-extensions is run
// you will be able to invoke this command by running <useful-utility --opts> from the command line
"A command line utility that is super useful!" // A description of this utility to be displayed when --help is run
) {
@Option(names = ["--extra-usefulness", "-e"], // A list of the different ways this option can be referenced
description = ["Use this option to add extra usefulness"] // Help description to be displayed for this option
)
private var extraUsefulness: Boolean = false // This default option will be shown in the help output
override fun runProgram(): Int { // override this function to run the actual program
try {
// do some stuff
        } catch (ex: KnownException) { // KnownException stands in for whatever exceptions your utility anticipates
return UsefulUtilityExitCodes.APPLICATION_SPECIFIC_ERROR_CODE // return a special exit code for known exceptions
}
return UsefulUtilityExitCodes.SUCCESS // this is the exit code to be returned to the system inherited from the ExitCodes base class
}
}
Then in your main()
method:
import net.corda.cliutils.start
fun main(args: Array<String>) {
UsefulUtility().start(args)
}
Application behavior¶
- Set exit codes using exitProcess.
- Zero means success.
- Other numbers mean errors.
- Setting a unique error code (starting from 1) for each thing that can conceivably break makes your tool shell-scripting friendly.
- Make sure all exit codes are documented with recommended remedies where applicable.
- Your --help text or other docs should ideally include examples. Writing examples is also a good way to find out if your program requires a dozen flags to do anything.
- Don’t print logging output to the console unless the user requested it via a --verbose flag (conventionally shortened to -v). Logs should be either suppressed or saved to a text file during normal usage, except for errors, which are always OK to print.
- Don’t print stack traces to the console. Stack traces can be added to logging files, but the user should see as meaningful an error description as possible.
Backwards Compatibility¶
Our commitment to API stability (see Checking API stability for more information) extends to new versions of our CLI tools. Removing or renaming parameters may break existing scripts that users have written, and should be avoided unless absolutely necessary.
Deprecating command line parameters¶
Command line parameters that are no longer necessary should be deprecated rather than removed. Picocli allows parameters to be hidden by use of the hidden option, for example:
import net.corda.cliutils.CordaCliWrapper
class UsefulUtility : CordaCliWrapper("useful-utility", "A command line utility that is super useful!") {
@Option(names = ["--no-longer-useful", "-u"],
hidden = true,
description = ["The option is no longer useful. Don't show it in the help output."]
)
private var noLongerUseful: Boolean = false
override fun runProgram(): Int {
if (noLongerUseful) // print a warning to the log to let the user know the option has been deprecated
logger.warn("The --no-longer-useful option is deprecated, please use the --alternatively-useful option instead")
// do some stuff
return UsefulUtilityExitCodes.SUCCESS
}
}
This will cause the option to still be usable, but means it won’t be shown when --help is called. As a result, existing scripts dependent on the parameter will still run, but new users will be directed to the replacement.
Changing the type of existing command line parameters¶
Don’t change the type of an existing command line parameter if that change would not be backwards compatible. For example, adding a value to an enumeration based parameter would be fine, but removing one would not. Instead of changing the type, consider adding a new parameter, deprecating the old parameter as described above, and redirecting inputs from the old parameter to the new parameter internally.
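As a hedged picocli sketch of the enumeration case (names invented): adding a new constant to the enum below is backwards compatible, whereas removing one would break existing scripts that pass it.
import picocli.CommandLine.Option

enum class LogMode { CONSOLE, FILE, BOTH } // adding BOTH later is safe; removing FILE is not

class LoggingOptions {
    @Option(names = ["--log-mode"], description = ["Where to write logs: \${COMPLETION-CANDIDATES}"])
    var logMode: LogMode = LogMode.FILE
}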
Testing backwards compatibility¶
When adding a new command line tool, a backwards compatibility test should be created by adding test-cli as a test dependency of your project and then creating a test class that extends CliBackwardsCompatibleTest for the class, like so:
import net.corda.testing.CliBackwardsCompatibleTest
class UsefulUtilityBackwardsCompatibleTest : CliBackwardsCompatibleTest(UsefulUtility::class.java)
The test will search for a YAML file on the class path named <fully.qualified.class.name>.yml which details the names, types and possible options of parameters, and compares it to the options of the current class to make sure they are compatible. In order to generate the file, create and run the test for your application. The test will fail, but the test output will contain the YAML for the current state of the tool. This can be copied and then pasted into a correctly named .yml file in the resources directory of the project.
Release process¶
As part of the release process, the release manager should regenerate the YAML files for each command line tool by following these steps:
- Check out the release branch
- Delete the <fully.qualified.tool.name>.yml file for the tool
- Re-run the backwards compatibility test for the tool
- Copy the resulting YAML from the test output
- Check out the master branch
- Replace the text in <fully.qualified.tool.name>.yml with the text generated on the release branch
Extending the state machine¶
This article explains how to extend the state machine code that underlies flow execution. It is intended for Corda contributors.
How to add suspending operations¶
To add a suspending operation for a simple request-response type function that perhaps involves some external IO, we can use the internal FlowAsyncOperation interface.
/**
* Interface for arbitrary operations that can be invoked in a flow asynchronously - the flow will suspend until the
* operation completes. Operation parameters are expected to be injected via constructor.
*/
@CordaSerializable
interface FlowAsyncOperation<R : Any> {
/**
* Performs the operation in a non-blocking fashion.
* @param deduplicationId If the flow restarts from a checkpoint (due to node restart, or via a visit to the flow
* hospital following an error) the execute method might be called more than once by the Corda flow state machine.
* For each duplicate call, the deduplicationId is guaranteed to be the same allowing duplicate requests to be
* de-duplicated if necessary inside the execute method.
*/
fun execute(deduplicationId: String): CordaFuture<R>
}
Let’s imagine we want to add a suspending operation that takes two integers and returns their sum. To do this we implement FlowAsyncOperation:
class SummingOperation(val a: Int, val b: Int) : FlowAsyncOperation<Int> {
override fun execute(deduplicationId: String): CordaFuture<Int> {
return doneFuture(a + b)
}
}
public final class SummingOperation implements FlowAsyncOperation<Integer> {
private final int a;
private final int b;
@NotNull
@Override
public CordaFuture<Integer> execute(String deduplicationId) {
return CordaFutureImplKt.doneFuture(this.a + this.b);
}
public final int getA() {
return this.a;
}
public final int getB() {
return this.b;
}
public SummingOperation(int a, int b) {
this.a = a;
this.b = b;
}
}
As we can see, the constructor of SummingOperation takes the two numbers, and the execute function simply returns a future that is immediately completed by the result of summing the numbers. Note how we don’t use @Suspendable on execute: this is because we’ll never suspend inside this function; the suspension will happen before we call it.
Note also how the input numbers are stored in the class as fields. This is important, because in the flow’s checkpoint we’ll store an instance of this class whenever we’re suspending on such an operation. If the node fails or restarts while the operation is underway, this class will be deserialized from the checkpoint and execute will be called again.
Now we can use the internal function executeAsync to execute this operation from a flow.
/** Executes the specified [operation] and suspends until operation completion. */
@Suspendable
fun <T, R : Any> FlowLogic<T>.executeAsync(operation: FlowAsyncOperation<R>, maySkipCheckpoint: Boolean = false): R {
val request = FlowIORequest.ExecuteAsyncOperation(operation)
return stateMachine.suspend(request, maySkipCheckpoint)
}
It simply takes a FlowAsyncOperation and an optional flag we don’t care about for now. We can use this function in a flow:
@StartableByRPC
class ExampleSummingFlow : FlowLogic<Int>() {
@Suspendable
override fun call(): Int {
val answer = executeAsync(SummingOperation(1, 2))
return answer // hopefully 3
}
}
@StartableByRPC
public final class ExampleSummingFlow extends FlowLogic<Integer> {
@Suspendable
@NotNull
@Override
public Integer call() {
return FlowAsyncOperationKt.executeAsync(this, new SummingOperation(1, 2), false);
}
}
That’s it! Obviously this is a mostly useless example, but this is the basic code structure one could extend for heavier computations or other IO. For example, the function could call into a CordaService or something similar. One thing to note is that the operation executed in execute must be redoable (i.e. idempotent), in case the node fails before the next checkpoint is committed.
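Here is a hedged sketch of what that idempotency can look like in practice; the external service is entirely hypothetical, and the import paths assume the internal Corda 4 API shown earlier:
import net.corda.core.concurrent.CordaFuture
import net.corda.core.internal.FlowAsyncOperation
import net.corda.core.internal.concurrent.doneFuture
import java.util.concurrent.ConcurrentHashMap

// Hypothetical external service, used purely for illustration.
object ExternalPaymentService {
    private val processed = ConcurrentHashMap<String, String>()
    fun charge(amountCents: Long, idempotencyKey: String): String =
        processed.computeIfAbsent(idempotencyKey) { "receipt-for-$amountCents" }
}

class ChargeCardOperation(private val amountCents: Long) : FlowAsyncOperation<String> {
    override fun execute(deduplicationId: String): CordaFuture<String> {
        // Using deduplicationId as an idempotency key means that if the node restarts
        // and execute() runs again, the duplicate request returns the original receipt
        // instead of charging twice.
        return doneFuture(ExternalPaymentService.charge(amountCents, deduplicationId))
    }
}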
How to test¶
The recommended way to test flows and the state machine is using the Driver DSL. This ensures that you will test your flow with a full node.
@Test
fun summingWorks() {
driver(DriverParameters(startNodesInProcess = true)) {
val aliceUser = User("aliceUser", "testPassword1", permissions = setOf(Permissions.all()))
val alice = startNode(providedName = ALICE_NAME, rpcUsers = listOf(aliceUser)).getOrThrow()
val aliceClient = CordaRPCClient(alice.rpcAddress)
val aliceProxy = aliceClient.start("aliceUser", "testPassword1").proxy
val answer = aliceProxy.startFlow(::ExampleSummingFlow).returnValue.getOrThrow()
assertEquals(3, answer)
}
}
@Test
public final void summingWorks() {
Driver.driver(new DriverParameters(), (DriverDSL dsl) -> {
User aliceUser = new User("aliceUser", "testPassword1",
new HashSet<>(Collections.singletonList(Permissions.all()))
);
Future<NodeHandle> aliceFuture = dsl.startNode(new NodeParameters()
.withProvidedName(ALICE_NAME)
.withRpcUsers(Collections.singletonList(aliceUser))
);
NodeHandle alice = KotlinUtilsKt.getOrThrow(aliceFuture, null);
CordaRPCClient aliceClient = new CordaRPCClient(alice.getRpcAddress());
CordaRPCOps aliceProxy = aliceClient.start("aliceUser", "testPassword1").getProxy();
Future<Integer> answerFuture = aliceProxy.startFlowDynamic(ExampleSummingFlow.class).getReturnValue();
int answer = KotlinUtilsKt.getOrThrow(answerFuture, null);
assertEquals(3, answer);
return Unit.INSTANCE;
});
}
The above will spin up a node and run our example flow.
How to debug issues¶
Let’s assume we made a mistake in our summing operation:
class SummingOperationThrowing(val a: Int, val b: Int) : FlowAsyncOperation<Int> {
override fun execute(deduplicationId: String): CordaFuture<Int> {
throw IllegalStateException("You shouldn't be calling me")
}
}
public final class SummingOperationThrowing implements FlowAsyncOperation<Integer> {
private final int a;
private final int b;
@NotNull
@Override
public CordaFuture<Integer> execute(String deduplicationId) {
throw new IllegalStateException("You shouldn't be calling me");
}
public final int getA() {
return this.a;
}
public final int getB() {
return this.b;
}
public SummingOperationThrowing(int a, int b) {
this.a = a;
this.b = b;
}
}
The operation now throws a rude exception. If we modify the example flow to use this and run the same test we will get a lot of logs about the error condition (as we are in dev mode). The interesting bit looks like this:
[WARN ] 18:38:52,613 [Node thread-1] (DumpHistoryOnErrorInterceptor.kt:39) interceptors.DumpHistoryOnErrorInterceptor.executeTransition - Flow [03ab886e-3fd3-4667-b944-ab6a3b1f90a7] errored, dumping all transitions:
--- Transition of flow [03ab886e-3fd3-4667-b944-ab6a3b1f90a7] ---
Timestamp: 2018-06-01T17:38:52.426Z
Event: DoRemainingWork
Actions:
CreateTransaction
PersistCheckpoint(id=[03ab886e-3fd3-4667-b944-ab6a3b1f90a7], checkpoint=Checkpoint(invocationContext=InvocationContext(origin=RPC(actor=Actor(id=Id(value=aliceUser), serviceId=AuthServiceId(value=NODE_CONFIG), owningLegalIdentity=O=Alice Corp, L=Madrid, C=ES)), trace=Trace(invocationId=26bcf0c3-f1d8-4098-a52d-3780f4095b7a, timestamp: 2018-06-01T17:38:52.234Z, entityType: Invocation, sessionId=393d1175-3bb1-4eb1-bff0-6ba317851260, timestamp: 2018-06-01T17:38:52.169Z, entityType: Session), actor=Actor(id=Id(value=aliceUser), serviceId=AuthServiceId(value=NODE_CONFIG), owningLegalIdentity=O=Alice Corp, L=Madrid, C=ES), externalTrace=null, impersonatedActor=null), ourIdentity=O=Alice Corp, L=Madrid, C=ES, sessions={}, subFlowStack=[Inlined(flowClass=class net.corda.docs.tutorial.flowstatemachines.ExampleSummingFlow, subFlowVersion=CorDappFlow(platformVersion=1, corDappName=net.corda.docs-c6816652-f975-4fb2-aa09-ef1dddea19b3, corDappHash=F4012397D8CF97926B5998E046DBCE16D497318BB87DCED66313912D4B303BB7))], flowState=Unstarted(flowStart=Explicit, frozenFlowLogic=74BA62EC5821EBD4FC4CBE129843F9ED6509DB37E6E3C8F85E3F7A8D84083500), errorState=Clean, numberOfSuspends=0, deduplicationSeed=03ab886e-3fd3-4667-b944-ab6a3b1f90a7))
PersistDeduplicationFacts(deduplicationHandlers=[net.corda.node.internal.FlowStarterImpl$startFlow$startFlowEvent$1@69326343])
CommitTransaction
AcknowledgeMessages(deduplicationHandlers=[net.corda.node.internal.FlowStarterImpl$startFlow$startFlowEvent$1@69326343])
SignalFlowHasStarted(flowId=[03ab886e-3fd3-4667-b944-ab6a3b1f90a7])
CreateTransaction
Continuation: Resume(result=null)
Diff between previous and next state:
isAnyCheckpointPersisted:
false
true
pendingDeduplicationHandlers:
[net.corda.node.internal.FlowStarterImpl$startFlow$startFlowEvent$1@69326343]
[]
isFlowResumed:
false
true
--- Transition of flow [03ab886e-3fd3-4667-b944-ab6a3b1f90a7] ---
Timestamp: 2018-06-01T17:38:52.487Z
Event: Suspend(ioRequest=ExecuteAsyncOperation(operation=net.corda.docs.tutorial.flowstatemachines.SummingOperationThrowing@40f4c23d), maySkipCheckpoint=false, fiber=15EC69204562BB396846768169AD4A339569D97AE841D805C230C513A8BA5BDE, )
Actions:
PersistCheckpoint(id=[03ab886e-3fd3-4667-b944-ab6a3b1f90a7], checkpoint=Checkpoint(invocationContext=InvocationContext(origin=RPC(actor=Actor(id=Id(value=aliceUser), serviceId=AuthServiceId(value=NODE_CONFIG), owningLegalIdentity=O=Alice Corp, L=Madrid, C=ES)), trace=Trace(invocationId=26bcf0c3-f1d8-4098-a52d-3780f4095b7a, timestamp: 2018-06-01T17:38:52.234Z, entityType: Invocation, sessionId=393d1175-3bb1-4eb1-bff0-6ba317851260, timestamp: 2018-06-01T17:38:52.169Z, entityType: Session), actor=Actor(id=Id(value=aliceUser), serviceId=AuthServiceId(value=NODE_CONFIG), owningLegalIdentity=O=Alice Corp, L=Madrid, C=ES), externalTrace=null, impersonatedActor=null), ourIdentity=O=Alice Corp, L=Madrid, C=ES, sessions={}, subFlowStack=[Inlined(flowClass=class net.corda.docs.tutorial.flowstatemachines.ExampleSummingFlow, subFlowVersion=CorDappFlow(platformVersion=1, corDappName=net.corda.docs-c6816652-f975-4fb2-aa09-ef1dddea19b3, corDappHash=F4012397D8CF97926B5998E046DBCE16D497318BB87DCED66313912D4B303BB7))], flowState=Started(flowIORequest=ExecuteAsyncOperation(operation=net.corda.docs.tutorial.flowstatemachines.SummingOperationThrowing@40f4c23d), frozenFiber=15EC69204562BB396846768169AD4A339569D97AE841D805C230C513A8BA5BDE), errorState=Clean, numberOfSuspends=1, deduplicationSeed=03ab886e-3fd3-4667-b944-ab6a3b1f90a7))
PersistDeduplicationFacts(deduplicationHandlers=[])
CommitTransaction
AcknowledgeMessages(deduplicationHandlers=[])
ScheduleEvent(event=DoRemainingWork)
Continuation: ProcessEvents
Diff between previous and next state:
checkpoint.numberOfSuspends:
0
1
checkpoint.flowState:
Unstarted(flowStart=Explicit, frozenFlowLogic=74BA62EC5821EBD4FC4CBE129843F9ED6509DB37E6E3C8F85E3F7A8D84083500)
Started(flowIORequest=ExecuteAsyncOperation(operation=net.corda.docs.tutorial.flowstatemachines.SummingOperationThrowing@40f4c23d), frozenFiber=15EC69204562BB396846768169AD4A339569D97AE841D805C230C513A8BA5BDE)
isFlowResumed:
true
false
--- Transition of flow [03ab886e-3fd3-4667-b944-ab6a3b1f90a7] ---
Timestamp: 2018-06-01T17:38:52.549Z
Event: DoRemainingWork
Actions:
ExecuteAsyncOperation(operation=net.corda.docs.tutorial.flowstatemachines.SummingOperationThrowing@40f4c23d)
Continuation: ProcessEvents
Diff between previous and intended state:
null
Diff between previous and next state:
checkpoint.errorState:
Clean
Errored(errors=[FlowError(errorId=-8704604242619505379, exception=java.lang.IllegalStateException: You shouldn't be calling me)], propagatedIndex=0, propagating=false)
--- Transition of flow [03ab886e-3fd3-4667-b944-ab6a3b1f90a7] ---
Timestamp: 2018-06-01T17:38:52.555Z
Event: DoRemainingWork
Actions:
Continuation: ProcessEvents
Diff between previous and next state:
null
--- Transition of flow [03ab886e-3fd3-4667-b944-ab6a3b1f90a7] ---
Timestamp: 2018-06-01T17:38:52.556Z
Event: StartErrorPropagation
Actions:
ScheduleEvent(event=DoRemainingWork)
Continuation: ProcessEvents
Diff between previous and next state:
checkpoint.errorState.propagating:
false
true
--- Transition of flow [03ab886e-3fd3-4667-b944-ab6a3b1f90a7] ---
Timestamp: 2018-06-01T17:38:52.606Z
Event: DoRemainingWork
Actions:
PropagateErrors(errorMessages=[ErrorSessionMessage(flowException=null, errorId=-8704604242619505379)], sessions=[], senderUUID=861f07d6-4b8f-42bd-9b52-5152812db2ba)
CreateTransaction
RemoveCheckpoint(id=[03ab886e-3fd3-4667-b944-ab6a3b1f90a7])
PersistDeduplicationFacts(deduplicationHandlers=[])
ReleaseSoftLocks(uuid=03ab886e-3fd3-4667-b944-ab6a3b1f90a7)
CommitTransaction
AcknowledgeMessages(deduplicationHandlers=[])
RemoveSessionBindings(sessionIds=[])
RemoveFlow(flowId=[03ab886e-3fd3-4667-b944-ab6a3b1f90a7], removalReason=ErrorFinish(flowErrors=[FlowError(errorId=-8704604242619505379, exception=java.lang.IllegalStateException: You shouldn't be calling me)]), lastState=StateMachineState(checkpoint=Checkpoint(invocationContext=InvocationContext(origin=RPC(actor=Actor(id=Id(value=aliceUser), serviceId=AuthServiceId(value=NODE_CONFIG), owningLegalIdentity=O=Alice Corp, L=Madrid, C=ES)), trace=Trace(invocationId=26bcf0c3-f1d8-4098-a52d-3780f4095b7a, timestamp: 2018-06-01T17:38:52.234Z, entityType: Invocation, sessionId=393d1175-3bb1-4eb1-bff0-6ba317851260, timestamp: 2018-06-01T17:38:52.169Z, entityType: Session), actor=Actor(id=Id(value=aliceUser), serviceId=AuthServiceId(value=NODE_CONFIG), owningLegalIdentity=O=Alice Corp, L=Madrid, C=ES), externalTrace=null, impersonatedActor=null), ourIdentity=O=Alice Corp, L=Madrid, C=ES, sessions={}, subFlowStack=[Inlined(flowClass=class net.corda.docs.tutorial.flowstatemachines.ExampleSummingFlow, subFlowVersion=CorDappFlow(platformVersion=1, corDappName=net.corda.docs-c6816652-f975-4fb2-aa09-ef1dddea19b3, corDappHash=F4012397D8CF97926B5998E046DBCE16D497318BB87DCED66313912D4B303BB7))], flowState=Started(flowIORequest=ExecuteAsyncOperation(operation=net.corda.docs.tutorial.flowstatemachines.SummingOperationThrowing@40f4c23d), frozenFiber=15EC69204562BB396846768169AD4A339569D97AE841D805C230C513A8BA5BDE), errorState=Errored(errors=[FlowError(errorId=-8704604242619505379, exception=java.lang.IllegalStateException: You shouldn't be calling me)], propagatedIndex=1, propagating=true), numberOfSuspends=1, deduplicationSeed=03ab886e-3fd3-4667-b944-ab6a3b1f90a7), flowLogic=net.corda.docs.tutorial.flowstatemachines.ExampleSummingFlow@600b0c6c, pendingDeduplicationHandlers=[], isFlowResumed=false, isTransactionTracked=false, isAnyCheckpointPersisted=true, isStartIdempotent=false, isRemoved=true, senderUUID=861f07d6-4b8f-42bd-9b52-5152812db2ba))
Continuation: Abort
Diff between previous and next state:
checkpoint.errorState.propagatedIndex:
0
1
isRemoved:
false
true
Whoa, that’s a lot of output. Now we get a glimpse into the bowels of the flow state machine: even though the flow code looks simple, the flow did quite a few things.
What we can see here is the sequence of transitions the flow’s state machine went through on the way to the error condition. For each transition we see what Event triggered it, what Actions were taken as a consequence, and how the internal State of the state machine was modified in the process. It also prints the transition’s Continuation, which indicates how the flow should proceed after the transition.
For example, in the first transition we can see that the triggering event was a DoRemainingWork. This is a generic event that instructs the state machine to check its own state to see whether there’s any work left to do, and to do it if there is. In this case the work involves persisting a checkpoint together with some deduplication data in a database transaction, acknowledging any triggering messages, signalling that the flow has started, and creating a fresh database transaction to be used by user code. The continuation is a Resume, which instructs the state machine to hand control to user code. The state change is a simple update of bookkeeping data. In other words, the first transition concerns the initialization of the flow, which includes the creation of the checkpoint.
The next transition is the suspension of our summing operation, triggered by the Suspend event. As we can see, in this transition we aren’t doing any work related to the summation yet; we’re merely persisting the checkpoint that indicates that we want to do the summation. Had we added a toString method to our SummingOperationThrowing, we would see a nicer message in the log, as sketched below.
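A minimal sketch of such an override, assuming the operation holds its two operands as constructor parameters and uses the Corda 4 FlowAsyncOperation signature:

import net.corda.core.concurrent.CordaFuture
import net.corda.core.internal.FlowAsyncOperation

class SummingOperationThrowing(val a: Int, val b: Int) : FlowAsyncOperation<Int> {
    override fun execute(deduplicationId: String): CordaFuture<Int> {
        throw IllegalStateException("You shouldn't be calling me")
    }

    // With this override the transition log would print e.g.
    // "SummingOperationThrowing(1 + 2)" instead of the opaque default
    // "net.corda.docs...SummingOperationThrowing@40f4c23d".
    override fun toString() = "SummingOperationThrowing($a + $b)"
}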
The next transition is the faulty one. As we can see, it was also triggered by a DoRemainingWork, and it executed our operation. Note that there are two state “diff”s printed: one that would have happened had the transition succeeded, and one that actually happened, which marked the flow’s state as errored. The rest of the transitions involve error propagation (triggered by the FlowHospital) and notification of failure, which ultimately raises the exception on the RPC resultFuture.
Deterministic Corda Modules¶
A Corda contract’s verify function should always produce the same results for the same input data. To that end, Corda provides the following modules:
core-deterministic
serialization-deterministic
jdk8u-deterministic
These are reduced versions of Corda’s core and serialization modules and of the OpenJDK 8 rt.jar, from which the non-deterministic functionality has been removed. The intention is that all CorDapp classes required for contract verification should be compiled against these modules to prevent them from containing non-deterministic behaviour.
Note
These modules are only a development aid. They cannot guarantee determinism without also including deterministic versions of all their dependent libraries, e.g. kotlin-stdlib.
Generating the Deterministic Modules¶
- JDK 8
jdk8u-deterministic is a “pseudo JDK” image that we can point the Java and Kotlin compilers to. It downloads the rt.jar containing a deterministic subset of the Java 8 APIs from the Artifactory.
To build a new version of this JAR and upload it to the Artifactory, see the create-jdk8u module. This is a standalone Gradle project within the Corda repository that will clone the deterministic-jvm8 branch of Corda’s OpenJDK repository and then build it. (This currently requires a C++ compiler, GNU Make and a UNIX-like development environment.)
- Corda Modules
core-deterministic and serialization-deterministic are generated from Corda’s core and serialization modules respectively, using both ProGuard and Corda’s JarFilter Gradle plugin. Corda developers configure these tools by applying Corda’s @KeepForDJVM and @DeleteForDJVM annotations to elements of core and serialization as described here.
The build generates each of Corda’s deterministic JARs in six steps:
1. Some very few classes in the original JAR must be replaced completely. This is typically because the original class uses something like ThreadLocal, which is not available in the deterministic Java APIs, and yet the class is still required by the deterministic JAR. We must keep such classes to a minimum!
2. The patched JAR is analysed by ProGuard for the first time using the following rule:

keep '@interface net.corda.core.KeepForDJVM { *; }'

ProGuard works by calculating how much code is reachable from given “entry points”, and in our case these entry points are the @KeepForDJVM classes. The unreachable classes are then discarded by ProGuard’s shrink option.
3. The remaining classes may still contain non-deterministic code. However, there is no way of writing a ProGuard rule to discard anything explicitly. Consider the following class:

@CordaSerializable
@KeepForDJVM
data class UniqueIdentifier @JvmOverloads @DeleteForDJVM constructor(
    val externalId: String? = null,
    val id: UUID = UUID.randomUUID()
) : Comparable<UniqueIdentifier> {
    ...
}

While CorDapps will definitely need to handle UniqueIdentifier objects, all of the secondary constructors generate a new random UUID and so are non-deterministic. Hence the next “determinising” step is to pass the classes to the JarFilter tool, which strips out all of the elements which have been annotated as @DeleteForDJVM and stubs out any functions annotated with @StubOutForDJVM. (Stub functions that return a value will throw UnsupportedOperationException, whereas void or Unit stubs will do nothing.)
4. After the @DeleteForDJVM elements have been filtered out, the classes are rescanned using ProGuard to remove any more code that has now become unreachable.
5. The remaining classes define our deterministic subset. However, the @kotlin.Metadata annotations on the compiled Kotlin classes still contain references to all of the functions and properties that ProGuard has deleted. Therefore we now use JarFilter to delete these references, as otherwise the Kotlin compiler would pretend that the deleted functions and properties are still present.
6. Finally, we use ProGuard again to validate our JAR against the deterministic rt.jar. This step will fail if ProGuard spots any Java API references that still cannot be satisfied by the deterministic rt.jar, and hence it will break the build.
Configuring IntelliJ with a Deterministic SDK¶
We would like to configure IntelliJ so that it will highlight uses of non-deterministic Java APIs as not found. Or, more specifically, we would like IntelliJ to use the deterministic-rt.jar as a “Module SDK” for deterministic modules rather than the rt.jar from the default project SDK, to make IntelliJ consistent with Gradle.
This is possible, but slightly tricky to configure, because IntelliJ will not recognise an SDK containing only the deterministic-rt.jar as being valid. It also requires that IntelliJ delegate all build tasks to Gradle, and that Gradle be configured to use the project’s SDK.
- Creating the Deterministic SDK
Gradle creates a suitable JDK image in the project’s jdk8u-deterministic/jdk directory, and you can configure IntelliJ to use this location for this SDK. However, you should also be aware that IntelliJ SDKs are available for all projects to use. To create this JDK image, execute the following:

$ gradlew jdk8u-deterministic:copyJdk

Now select File/Project Structure/Platform Settings/SDKs and add a new JDK SDK with the jdk8u-deterministic/jdk directory as its home. Rename this SDK to something like “1.8 (Deterministic)”.
This should be sufficient for IntelliJ. However, if IntelliJ realises that this SDK does not contain a full JDK then you will need to configure the new SDK by hand:
1. Create a JDK Home directory with the following contents:

jre/lib/rt.jar

where rt.jar here is this renamed artifact:

<dependency>
    <groupId>net.corda</groupId>
    <artifactId>deterministic-rt</artifactId>
    <classifier>api</classifier>
</dependency>

2. While IntelliJ is not running, locate the config/options/jdk.table.xml file in IntelliJ’s configuration directory. Add an empty <jdk> section to this file:

<jdk version="2">
    <name value="1.8 (Deterministic)"/>
    <type value="JavaSDK"/>
    <version value="java version &quot;1.8.0&quot;"/>
    <homePath value=".. path to the deterministic JDK directory .."/>
    <roots>
    </roots>
</jdk>
3. Open IntelliJ and select File/Project Structure/Platform Settings/SDKs. The “1.8 (Deterministic)” SDK should now be present. Select it and then click on the Classpath tab. Press the “Add” / “Plus” button to add rt.jar to the SDK’s classpath. Then select the Annotations tab and include the same JAR(s) as the other SDKs.
- Configuring the Corda Project
Open the root build.gradle file and define this property:

buildscript {
    ext {
        ...
        deterministic_idea_sdk = '1.8 (Deterministic)'
        ...
    }
}
- Configuring IntelliJ
Go to File/Settings/Build, Execution, Deployment/Build Tools/Gradle, and configure Gradle’s JVM to be the project’s JVM.
Go to File/Settings/Build, Execution, Deployment/Build Tools/Gradle/Runner, and select these options:
- Delegate IDE build/run action to Gradle
- Run tests using the Gradle Test Runner
Delete all of the out directories that IntelliJ has previously generated for each module.
Go to View/Tool Windows/Gradle and click the Refresh all Gradle projects button.
These steps will enable IntelliJ’s presentation compiler to use the deterministic rt.jar with the following modules:
- core-deterministic
- serialization-deterministic
- core-deterministic:testing:common
but still build everything using Gradle with the full JDK.
Testing the Deterministic Modules¶
The core-deterministic:testing module executes some basic JUnit tests for the core-deterministic and serialization-deterministic JARs. These tests are compiled against the deterministic rt.jar, although they are still executed using the full JDK.
The testing module also has two sub-modules:
core-deterministic:testing:data
- This module generates test data such as serialised transactions and elliptic curve key pairs using the full non-deterministic core library and JDK. This data is all written into a single JAR which the testing module adds to its classpath.
core-deterministic:testing:common
- This module provides the test classes which the testing and data modules need to share. It is therefore compiled against the deterministic API subset.
Applying @KeepForDJVM and @DeleteForDJVM annotations¶
Corda developers need to understand how to annotate classes in the core and serialization modules correctly in order to maintain the deterministic JARs.
Note
Every Kotlin class still has its own .class file, even when all of those classes share the same source file. Also, annotating the file:

@file:KeepForDJVM
package net.corda.core.internal

does not automatically annotate any class declared within this file. It merely annotates any accompanying Kotlin xxxKt class.
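To make this concrete, here is a minimal sketch (DeterministicHelper is a hypothetical class used purely for illustration): a class in such a file still needs its own annotation.

@file:KeepForDJVM            // only annotates the synthesised InternalUtilsKt class
package net.corda.core.internal

import net.corda.core.KeepForDJVM

// The @file: annotation above does not propagate to this class,
// so it must be annotated explicitly to survive the ProGuard shrink step.
@KeepForDJVM
class DeterministicHelper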
For more information about how JarFilter is processing the byte-code inside core and serialization, use Gradle’s --info or --debug command-line options.
- Deterministic Classes
Classes that must be included in the deterministic JAR should be annotated as @KeepForDJVM.

@Target(FILE, CLASS)
@Retention(BINARY)
@CordaInternal
annotation class KeepForDJVM

To preserve any Kotlin functions, properties or type aliases that have been declared outside of a class, you should annotate the source file’s package declaration instead:

@file:JvmName("InternalUtils")
@file:KeepForDJVM
package net.corda.core.internal

infix fun Temporal.until(endExclusive: Temporal): Duration = Duration.between(this, endExclusive)
- Non-Deterministic Elements
Elements that must be deleted from classes in the deterministic JAR should be annotated as @DeleteForDJVM.

@Target(
    FILE,
    CLASS,
    CONSTRUCTOR,
    FUNCTION,
    PROPERTY_GETTER,
    PROPERTY_SETTER,
    PROPERTY,
    FIELD,
    TYPEALIAS
)
@Retention(BINARY)
@CordaInternal
annotation class DeleteForDJVM

You must also ensure that a deterministic class’s primary constructor does not reference any classes that are not available in the deterministic rt.jar. The biggest risk here would be that JarFilter would delete the primary constructor and that the class could no longer be instantiated, although JarFilter will print a warning in this case. However, it is also likely that the “determinised” class would have a different serialisation signature than its non-deterministic version and so become unserialisable on the deterministic JVM.
Primary constructors that have non-deterministic default parameter values must still be annotated as @DeleteForDJVM because they cannot be refactored without breaking Corda’s binary interface. The Kotlin compiler will automatically apply this @DeleteForDJVM annotation - along with any others - to all of the class’s secondary constructors too. The JarFilter plugin can then remove the @DeleteForDJVM annotation from the primary constructor so that it can subsequently delete only the secondary constructors.
The annotations that JarFilter will “sanitise” from primary constructors in this way are listed in the plugin’s configuration block, e.g.

task jarFilter(type: JarFilterTask) {
    ...
    annotations {
        ...
        forSanitise = [
            "net.corda.core.DeleteForDJVM"
        ]
    }
}
Be aware that package-scoped Kotlin properties are all initialised within a common <clinit> block inside their host .class file. This means that when JarFilter deletes these properties, it cannot also remove their initialisation code. For example:

package net.corda.core

@DeleteForDJVM
val map: MutableMap<String, String> = ConcurrentHashMap()

In this case, JarFilter would delete the map property but the <clinit> block would still create an instance of ConcurrentHashMap. The solution here is to refactor the property into its own file and then annotate the file itself as @DeleteForDJVM instead.
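A minimal sketch of that refactoring (the file name is hypothetical):

// NonDeterministicMap.kt - the property now lives alone in its own file,
// and the whole file is marked for deletion from the deterministic JAR.
@file:DeleteForDJVM
package net.corda.core

import java.util.concurrent.ConcurrentHashMap

@DeleteForDJVM
val map: MutableMap<String, String> = ConcurrentHashMap()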
- Non-Deterministic Function Stubs
Sometimes it is impossible to delete a function entirely, or a function may have some non-deterministic code embedded inside it that cannot be removed. For these rare cases, there is the @StubOutForDJVM annotation:

@Target(
    CONSTRUCTOR,
    FUNCTION,
    PROPERTY_GETTER,
    PROPERTY_SETTER
)
@Retention(BINARY)
@CordaInternal
annotation class StubOutForDJVM

This annotation instructs JarFilter to replace the function’s body with either an empty body (for functions that return void or Unit) or one that throws UnsupportedOperationException. For example:

fun necessaryCode() {
    nonDeterministicOperations()
    otherOperations()
}

@StubOutForDJVM
private fun nonDeterministicOperations() {
    // etc
}
Design Docs¶
Changelog¶
Here’s a summary of what’s changed in each Corda release. For guidance on how to upgrade code from the previous release, see Upgrading apps to Corda 4.
Version 4.0¶
Fixed race condition between NodeVaultService.trackBy and NodeVaultService.notifyAll, where there could be states that were not reflected in the data feed returned from trackBy (either in the query’s result snapshot or the observable).
TimedFlows (only used by the notary client flow) will never give up trying to reach the notary, as this would leave the states in the notarisation request in an undefined state (unknown whether the spend has been notarised, i.e. has happened, or not). Also, retries have been disabled for single-node notaries, since in this case they offer no potential benefits, unlike for a notary cluster with several members who might have different availability.
New configuration property database.initialiseAppSchema with values UPDATE, VALIDATE and NONE. The property controls the behaviour of the Hibernate DDL generation: UPDATE performs an update of CorDapp schemas, while VALIDATE only verifies their integrity. The property does not affect the node-specific DDL handling and complements database.initialiseSchema to disable DDL handling altogether.
JacksonSupport.createInMemoryMapper was incorrectly marked as deprecated and is no longer so.
Standardised CorDapp version identifiers in jar manifests (aligned with associated cordapp Gradle plugin changes). Updated all samples to reflect the new conventions.
Introduction of unique CorDapp version identifiers in jar manifests for contract and flows/services CorDapps. Updated all sample CorDapps to reflect new conventions. See CorDapp separation for further information.
Automatic Constraints propagation for hash-constrained states to signature-constrained states. This allows Corda 4 signed CorDapps using signature constraints to consume existing hash constrained states generated by unsigned CorDapps in previous versions of Corda.
You can now load different CorDapps for different nodes in the node driver and mock network. This previously wasn’t possible with the DriverParameters.extraCordappPackagesToScan and MockNetwork.cordappPackages parameters, as all the nodes would get the same CorDapps. See TestCordapp, NodeParameters.additionalCordapps and MockNodeParameters.additionalCordapps.
DriverParameters.extraCordappPackagesToScan and MockNetwork.cordappPackages have been deprecated as they do not support the new CorDapp versioning and MANIFEST metadata support that has been added. They create artificial CorDapp jars which do not preserve these settings and thus may produce incorrect results when testing. It is recommended that DriverParameters.cordappsForAllNodes and MockNetworkParameters.cordappsForAllNodes be used instead; a sketch of the new API follows below.
Fixed a problem with the IRS demo not being able to simulate future dates as expected (https://github.com/corda/corda/issues/3851).
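A minimal driver sketch of the per-node CorDapp loading, assuming the finance contracts CorDapp is available and where com.example.workflows is a hypothetical package used only for illustration:

import net.corda.core.utilities.getOrThrow
import net.corda.testing.driver.DriverParameters
import net.corda.testing.driver.NodeParameters
import net.corda.testing.driver.driver
import net.corda.testing.node.TestCordapp

fun main() {
    // Every node in the network gets the finance contracts CorDapp...
    driver(DriverParameters(
            cordappsForAllNodes = listOf(TestCordapp.findCordapp("net.corda.finance.contracts"))
    )) {
        // ...while this particular node additionally loads a (hypothetical) workflows CorDapp.
        val node = startNode(NodeParameters(
                additionalCordapps = listOf(TestCordapp.findCordapp("com.example.workflows"))
        )).getOrThrow()
        println(node.nodeInfo.legalIdentities.first())
    }
}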
Fixed a problem that was preventing Cash.generateSpend from being used more than once per transaction (https://github.com/corda/corda/issues/4110).
Fixed a bug resulting in poor vault query performance and incorrect results when sorting.
Improved the exception thrown by AttachmentsClassLoader when an attachment cannot be used because its uploader is not trusted.
Fixed deadlocks generated by starting flows from within a CordaService.
Marked the Attachment interface as @DoNotImplement because it is not meant to be extended by CorDapp developers. If you have already done so, please get in contact on the usual communication channels.
Added auto-acceptance of network parameters for network updates. This behaviour is available for a subset of the network parameters and is configurable via the node config. See Network Map for more information.
Deprecated SerializationContext.withAttachmentsClassLoader. This functionality has always been disabled by flags and there is no reason for a CorDapp developer to use it. It is just an internal implementation detail of Corda.
Deprecated all means to directly create a LedgerTransaction instance, as client code is only meant to get hold of a LedgerTransaction via WireTransaction.toLedgerTransaction.
Introduced new optional network bootstrapper command-line options (--register-package-owner, --unregister-package-owner) to register/unregister a Java package namespace with an associated owner in the network parameter packageOwnership whitelist.
BFT-Smart and Raft notary implementations have been moved to the net.corda.notary.experimental package to emphasise their experimental nature. Note that it is not possible to preserve the state for both types of notaries when upgrading from V3 or an earlier Corda version.
New “validate-configuration” sub-command for corda.jar, which validates the actual node configuration without starting the node.
CorDapps now have the ability to specify a minimum platform version in their MANIFEST.MF to prevent old nodes from loading them.
CorDapps have the ability to specify a target platform version in their MANIFEST.MF as a means of indicating to the node that the app was designed and tested on that version.
Nodes will no longer automatically reject flow initiation requests for flows they don’t know about. Instead the request will remain unacknowledged in the message broker. This enables the recovery scenario whereby any missing CorDapp can be installed and the flow retried on node restart. As a consequence the initiating flow will be blocked until the receiving node has resolved the issue.
FinalityFlow is now an inlined flow and requires FlowSessions to each party intended to receive the transaction (a sketch of the new API follows below). This fixes the security problem with the old API, which required every node to accept any transaction it received without any checks. Existing CorDapp binaries relying on the old behaviour will continue to function as previously. However, it is strongly recommended that CorDapps switch to this new API. See Upgrading apps to Corda 4 for further details.
For similar reasons, SwapIdentitiesFlow, from confidential-identities, is also now an inlined flow. The old API has been preserved but it is strongly recommended that CorDapps switch to the new API. See Upgrading apps to Corda 4 for further details.
Introduced a new optional network bootstrapper command-line option (--minimum-platform-version) to set the corresponding network parameter.
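A minimal sketch of the inlined FinalityFlow API; PayFlow and PayResponder are hypothetical flow names, and the transaction-building step is elided:

import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.*
import net.corda.core.identity.Party
import net.corda.core.transactions.SignedTransaction

@InitiatingFlow
@StartableByRPC
class PayFlow(private val counterparty: Party) : FlowLogic<SignedTransaction>() {
    @Suspendable
    override fun call(): SignedTransaction {
        val stx: SignedTransaction = TODO("build and sign the transaction")
        // Corda 4: FinalityFlow is inlined and needs a session to every
        // party that should receive the transaction.
        val session = initiateFlow(counterparty)
        return subFlow(FinalityFlow(stx, session))
    }
}

// The counterparty runs ReceiveFinalityFlow to record the transaction,
// instead of blindly accepting anything as under the old API.
@InitiatedBy(PayFlow::class)
class PayResponder(private val otherSide: FlowSession) : FlowLogic<Unit>() {
    @Suspendable
    override fun call() {
        subFlow(ReceiveFinalityFlow(otherSide))
    }
}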
Vault storage of contract state constraints metadata and associated vault query functions to retrieve and sort by constraint type.
New overload for the CordaRPCClient.start() method, allowing the target legal identity to use for the RPC call to be specified.
Case-insensitive vault queries can be specified via a boolean on applicable SQL criteria builder operators. By default queries will be case-sensitive.
Getter added to CordaRPCOps for the node’s network parameters.
The RPC client library now checks at startup whether the server is of the client library’s major version or higher. Therefore, to connect to a Corda 4 node you must use version 4 or lower of the library. This behaviour can be overridden by specifying a lower number in the CordaRPCClientConfiguration class.
Removed the experimental feature CordformDefinition.
Added a new overload of StartedMockNode.registerInitiatedFlow which allows registering custom initiating-responder flow pairs, which can be useful for testing error cases.
“app”, “rpc”, “p2p” and “unknown” are no longer allowed as uploader values when importing attachments. These are used internally in security-sensitive code.
Changed the type of the checkpoint_value column. Please check the upgrade notes on how to update your database.
Removed the buggy serverNameTablePrefix configuration.
freeLocalHostAndPort, freePort, and getFreeLocalPorts from TestUtils have been deprecated, as they don’t provide any guarantee that the returned port will be available, which can result in flaky tests. Use PortAllocation.Incremental instead.
Docs for IdentityService.assertOwnership updated to correctly state that an UnknownAnonymousPartyException is thrown rather than IllegalStateException.
The Corda JPA entities no longer implement java.io.Serializable, as this was causing persistence errors in obscure cases. Java serialization is disabled globally in the node, but in the unlikely event you were relying on these types being Java serializable please contact us.
Removed all references to out-of-process transaction verification.
The class carpenter has a “lenient” mode where it will, during deserialisation, happily synthesise classes that implement interfaces with unimplemented methods. This is useful, for example, for object viewers. This can be turned on with SerializationContext.withLenientCarpenter.
Added a FlowMonitor to log information about flows that have been waiting for IO for more than a configurable threshold.
H2 database changes:
- The node’s H2 database now listens on localhost by default.
- The database server address must also be enabled in the node configuration.
- A new h2Settings configuration block supersedes the h2Port option.
Improved documentation PDF quality. Building the documentation now requires LaTeX to be installed on the OS.
Added devModeOptions.allowCompatibilityZone to re-enable the use of a compatibility zone together with devMode.
Fixed an issue where trackBy was returning ContractStates from a transaction that were not being tracked. The unrelated ContractStates will now be filtered out from the returned Vault.Update.
Introducing the flow hospital: a component of the node that manages flows that have errored, deciding whether they should be retried from their previous checkpoints or have their errors propagated. Currently it will respond to any error that occurs during the resolution of a received transaction as part of FinalityFlow. In such a scenario the receiving flow will be parked and retried on node restart. This is to allow the node operator to rectify the situation, as otherwise the node would have an incomplete view of the ledger.
Fixed an issue preventing out-of-process nodes started by the Driver from logging to file.
Fixed an issue with CashException not being deserializable after the introduction of AMQP for RPC.
Removed the -Xmx VM argument from Explorer’s Capsule setup. This helps avoid out-of-memory errors.
New killFlow RPC for killing stuck flows; a usage sketch follows below.
Shell now kills an ongoing flow when CTRL+C is pressed in the terminal.
Added a check at startup that all persisted checkpoints are compatible with the current version of the code.
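A minimal sketch of killing a stuck flow over RPC, assuming a node with RPC on localhost:10006, the credentials shown, and a known flow id:

import net.corda.client.rpc.CordaRPCClient
import net.corda.core.flows.StateMachineRunId
import net.corda.core.utilities.NetworkHostAndPort
import java.util.UUID

fun main() {
    val client = CordaRPCClient(NetworkHostAndPort("localhost", 10006))
    client.start("user1", "test").use { connection ->
        // The flow id is the same identifier that appears in the transition logs.
        val flowId = StateMachineRunId(UUID.fromString("03ab886e-3fd3-4667-b944-ab6a3b1f90a7"))
        val killed = connection.proxy.killFlow(flowId)
        println("Flow $flowId killed: $killed")
    }
}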
ServiceHub and CordaRPCOps can now safely be used from multiple threads without incurring database transaction problems.
Doorman and NetworkMap URLs can now be configured individually rather than being assumed to be the same server. Current compatibilityZoneURL configurations remain valid. See both Node configuration and Network certificates for details.
Improved audit trail for FinalityFlow and related sub-flows.
Notary client flow retry logic was improved to handle validating flows better. Instead of re-sending flow messages, the entire flow is now restarted after a timeout. The relevant node configuration section was renamed from p2pMessagingRetry to flowTimeout to reflect the behaviour change.
NodeStartup will now only print the node’s configuration if devMode is true, avoiding the risk of printing passwords in a production setup.
SLF4J’s MDC will now only be printed to the console if not empty. No more log lines ending with “{}”.
WireTransaction.Companion.createComponentGroups has been marked as @CordaInternal. It was never intended to be public and was already internal for Kotlin code.
The RPC server will now mask internal errors to RPC clients if not in devMode. Throwables implementing ClientRelevantError will continue to be propagated to clients.
RPC framework moved from Kryo to the Corda AMQP implementation [Corda-847]. This completes the removal of Kryo from general use within Corda; it remains only for use in flow checkpointing.
Set co.paralleluniverse.fibers.verifyInstrumentation=true in devMode.
Node will now gracefully fail to start if one of the required ports is already in use.
Node will now gracefully fail to start if devMode is true and compatibilityZoneURL is specified.
Added smart detection logic for the development mode setting and an option to override it from the command line.
Changes to the JSON/YAML serialisation format from JacksonSupport, which also applies to the node shell:
- WireTransaction now nicely outputs into its components: id, notary, inputs, attachments, outputs, commands, timeWindow and privacySalt. This can be deserialized back.
- SignedTransaction is serialised into wire (i.e. currently only WireTransaction tested) and signatures, and can be deserialized back.
The Vault Criteria API has been extended to take a more precise specification of which class contains a field. This primarily impacts Java users; Kotlin users need take no action. The old methods have been deprecated but still work; the new methods avoid bugs that can occur when JPA schemas inherit from each other.
Due to ongoing work, the experimental interfaces for defining custom notary services have been moved to the internal package. CorDapps implementing custom notary services will need to be updated; see samples/notary-demo for an example. Further changes may be required in the future.
Configuration file changes:
- Added the command-line argument on-unknown-config-keys to allow specifying the behaviour on unknown node configuration property keys. Values are [FAIL, IGNORE]; defaults to FAIL if unspecified.
- Introduced a placeholder for custom properties within node.conf; the property key is “custom”.
- The deprecated web server now has its own web-server.conf file, separate from node.conf.
- Property keys with double quotes (e.g. “key”) in node.conf are no longer allowed; for the rationale, refer to Node configuration.
- The issuableCurrencies property is no longer valid for node.conf. Instead, it has been moved to the finance workflows CorDapp configuration.
Added public support for creating CordaRPCClient connections using SSL. For this to work the node needs to provide client applications with a certificate to be added to a truststore. See Using the client RPC API.
The node RPC broker opens two endpoints that are configured with address and adminAddress. RPC clients connect to the address, while the node connects to the adminAddress. Previously, if SSL was enabled for RPC, the adminAddress was equal to the address.
Upgraded H2 to v1.4.197.
Shell (embedded, available only in dev mode or via SSH) connects to the node via RPC instead of using the CordaRPCOps object directly. To enable RPC connectivity, ensure the node’s rpcSettings.address and rpcSettings.adminAddress settings are present.
Changes to the network bootstrapper:
- The whitelist.txt file is no longer needed. The existing network parameters file is used to update the current contracts whitelist.
- The CorDapp jars are also copied to each node’s cordapps directory.
Errors thrown by a Corda node will now be reported to a calling RPC client with attention to serialization and obfuscation of internal data.
Serializing an inner class (non-static nested class in Java, inner class in Kotlin) will now be rejected explicitly by the serialization framework. Prior to this change it didn’t work, but the error thrown was opaque (complaining about too few arguments to a constructor). Whilst this was possible in the older Kryo implementation (Kryo passed null as the synthesised reference to the outer class), as per the Java documentation we are disallowing this, as the paradigm in general makes little sense for contract states.
The node can be shut down abruptly by the shutdown function in CordaRPCOps, or gracefully (draining flows first) through the gracefulShutdown command from the shell.
API change: the net.corda.core.schemas.PersistentStateRef fields (index and txId) are now non-nullable. The fields were always effectively non-nullable: values were set from non-nullable fields of other objects. The class is used as database primary key columns of other entities, and databases already impose those columns as non-nullable (even if the JPA annotation nullable=false was absent). In case your CorDapps use this entity class to persist data in their own custom tables as non-primary-key columns, refer to Upgrading apps to Corda 4 for upgrade instructions.
Added a public method to check whether a public key satisfies Corda’s recommended algorithm specs, Crypto.validatePublicKey(java.security.PublicKey); a usage sketch follows below. For instance, this method will check if an ECC key lies on a valid curve or if an RSA key is >= 2048 bits. This might be required for extra key validation checks, e.g., for Doorman to check that a CSR key meets the minimum security requirements.
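A minimal sketch of the key check, assuming the method returns a Boolean; the key pair is generated locally just for illustration:

import net.corda.core.crypto.Crypto

fun main() {
    // Generate an ECDSA key pair, then validate the public half against
    // Corda's recommended algorithm specs (e.g. the point lies on the curve).
    val keyPair = Crypto.generateKeyPair(Crypto.ECDSA_SECP256K1_SHA256)
    val ok = Crypto.validatePublicKey(keyPair.public)
    println("Public key satisfies recommended specs: $ok")
}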
Table name with a typo changed from NODE_ATTCHMENTS_CONTRACTS to NODE_ATTACHMENTS_CONTRACTS.
Node logs a warning for any MappedSchema containing a JPA entity referencing another JPA entity from a different MappedSchema. The log entry starts with “Cross-reference between MappedSchemas”. API: Persistence documentation no longer suggests mapping between different schemas.
Upgraded Artemis to v2.6.2.
Introduced the concept of “reference input states”. A reference input state is a ContractState which can be referred to in a transaction by the contracts of input and output states, but whose contract is not executed as part of the transaction verification process; it is not consumed when the transaction is committed to the ledger, but it is checked for “current-ness”. In other words, the contract logic isn’t run for the referencing transaction only; it’s still a normal state when it occurs in an input or output position. This feature is only available on Corda networks running with a minimum platform version of 4. A usage sketch follows below.
A new wrapper class over StateRef is introduced, called ReferenceStateRef. Although “reference input states” are stored as StateRef objects in WireTransaction, we needed a way to distinguish between “input states” and “reference input states” when filtering by object type. Thus, to filter in all “reference input states” in a FilteredTransaction, check whether the element is of type ReferenceStateRef.
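A minimal sketch of referencing data without consuming it; `rates` is assumed to be a StateAndRef of some reference-data state queried from the vault earlier:

import net.corda.core.contracts.ContractState
import net.corda.core.contracts.StateAndRef
import net.corda.core.transactions.TransactionBuilder

fun addRates(builder: TransactionBuilder, rates: StateAndRef<ContractState>) {
    // referenced() wraps the StateAndRef as a ReferencedStateAndRef; the state's
    // contract is not executed and the state is not consumed, but the notary
    // checks that it is current (i.e. unspent).
    builder.addReferenceState(rates.referenced())
}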
Removed type parameter U from tryLockFungibleStatesForSpending to allow the function to be used with FungibleState as well as FungibleAsset. This might cause a compile failure in some obscure cases due to the removal of the type parameter from the method. If your CorDapp does specify types explicitly when using this method, then updating the types will allow your app to compile successfully. However, those using type inference (e.g. using Kotlin) should not experience any changes. Old CorDapp JARs will still work regardless.
The issuer_ref column in FungibleStateSchema was updated to be nullable to support the introduction of the FungibleState interface. The vault_fungible_states table can hold both FungibleAssets and FungibleStates.
CorDapps built by corda-gradle-plugins are now signed and sealed JAR files. Signing can be configured or disabled, and it defaults to using the Corda development certificate.
Finance CorDapps are now built as sealed and signed JAR files. Custom classes can no longer be placed in the packages defined in either finance CorDapp, nor access their non-public members.
The Finance CorDapp was split into two separate apps: corda-finance-contracts and corda-finance-workflows. There is no longer a single CorDapp which provides both. However, you need to have both JARs installed in the node simultaneously for the app to work.
All sample CorDapps were split into separate workflows and contracts apps to reflect the new convention. It is recommended to structure your CorDapps this way; see Upgrading apps to Corda 4 on upgrading your CorDapp.
The format of the shell commands’ output can now be customized via the node shell, using the output-format command.
The node_transaction_mapping database table has been folded into the node_transactions database table as an additional column.
Logging for P2P and RPC has been separated, to make it easier to enable all P2P or RPC logging without hand-picking loggers for individual classes.
Vault Query Criteria have been enhanced to allow filtering by state relevancy; a sketch follows below. Queries can request all states, just relevant ones, or just non-relevant ones. The default is to return all states, to maintain backwards compatibility. Note that this means apps running on nodes using Observer node functionality should update their queries to request only relevant states if they are only expecting to see states in which they participate.
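A minimal sketch of a relevancy-filtered query over RPC, assuming the relevancyStatus parameter of VaultQueryCriteria:

import net.corda.core.contracts.LinearState
import net.corda.core.messaging.CordaRPCOps
import net.corda.core.node.services.Vault
import net.corda.core.node.services.vault.QueryCriteria

// Query only the states this node is a participant in; the default is
// RelevancyStatus.ALL, which preserves the old behaviour.
fun relevantStates(rpc: CordaRPCOps): Vault.Page<LinearState> {
    val criteria = QueryCriteria.VaultQueryCriteria(
            relevancyStatus = Vault.RelevancyStatus.RELEVANT
    )
    return rpc.vaultQueryByCriteria(criteria, LinearState::class.java)
}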
The Postgres dependency was updated to version 42.2.5.
Test CordaServices can be installed on mock nodes using UnstartedMockNode.installCordaService.
The finance-contracts demo CorDapp has been slimmed down to contain only that which is relevant for contract verification. Everything else has been moved to the finance-workflows CorDapp:
- The cash selection logic. AbstractCashSelection is now in net.corda.finance.workflows.asset.selection, so any custom implementations must now be defined in META-INF/services/net.corda.finance.workflows.asset.selection.AbstractCashSelection.
- The Jackson annotations on Expression have been removed. You will need to use FinanceJSONSupport.registerFinanceJSONMappers if you wish to preserve the JSON format for this class.
- The various utility methods defined in Cash for creating cash transactions have been moved to net.corda.finance.workflows.asset.CashUtils. Similarly with CommercialPaperUtils and ObligationUtils.
- Various other utilities such as GetBalances and the test calendar data.
The only exception to this is Interpolator and related classes. These are now in the IRS demo workflows CorDapp.
Vault states are migrated when moving from V3 to V4: the relevancy column is correctly filled, and the state party table is populated. Note: This means Corda can be slow to start up for the first time after upgrading from V3 to V4.
End of changelog.