Cookbook
The cookbook hosts small code snippets in a question-and-answer format. It does not walk you through a whole setup like the basic examples do, but rather aims to answer questions around specific uses. It also does not replace the FAQ; rather, it is focused explicitly on code samples, instead of general troubleshooting for common mishaps.
These questions are added as they come up, the hope being that the snippets here will be useful to others, instead of just the original person who asked about a specific use case. As of today it is only a start and not fully comprehensive yet - check back early, check back often.
Blocks are the building blocks of any blockchain. As such, there are a number of examples for working with blocks and headers that could be useful.
A block hash refers to the hash over the header, while the extrinsic hash refers to the hash of the encoded extrinsic. Since all objects returned by the API implement the `.hash => Hash` getter, we can simply use this to view the actual hash.

```javascript
const { ApiPromise, WsProvider } = require('@polkadot/api');

async function main () {
  const provider = new WsProvider('wss://rpc.phuquoc.dog');
  const api = await ApiPromise.create({ provider });

  // retrieve the block hash for a specific block number
  const blockNumber = 1853;
  const blockHash = await api.rpc.chain.getBlockHash(blockNumber);
  const signedBlock = await api.rpc.chain.getBlock(blockHash);

  console.log('blockHash: ' + blockHash);
  console.log('signedBlock: ' + signedBlock);

  // retrieve related information for the same block in parallel
  const [
    { block },
    blockEvents,
    blockHeader,
    totalIssuance,
    runtimeVersion,
    activeEra,
    currentIndex,
    chainElectionStatus,
    timestampMs
  ] = await Promise.all([
    api.rpc.chain.getBlock(blockHash),
    api.query.system.events.at(blockHash),
    api.derive.chain.getHeader(blockHash),
    api.query.balances.totalIssuance.at(blockHash),
    api.rpc.state.getRuntimeVersion(blockHash),
    api.query.staking.activeEra.at(blockHash)
      .then((res) => (res.toJSON() ? res.toJSON().index : 0)),
    api.query.session.currentIndex.at(blockHash)
      .then((res) => (res || 0)),
    api.query.electionProviderMultiPhase.currentPhase.at(blockHash),
    api.query.timestamp.now.at(blockHash)
  ]);

  console.log('blockEvents: ' + blockEvents);
  console.log('blockHeader: ' + blockHeader);
  console.log('totalIssuance: ' + totalIssuance);
  console.log('runtimeVersion: ' + runtimeVersion);

  // the derived header has the block author populated
  const blockAuthor = blockHeader.author;
  const blockAuthorIdentity = await api.derive.accounts.info(blockAuthor);

  console.log('blockAuthor: ' + blockAuthor);
  console.log('blockAuthorIdentity: ', blockAuthorIdentity);
}

main()
  .catch(console.error)
  .finally(() => console.log('------ Finished block demo ------'));
```
The block author is encoded inside the consensus logs for the block. To extract it, you need to decode the log (which the API does) and then map the validator index to the list of session validators. This extraction is, however, available on the API derives for new-head subscriptions, which return an extended header with the author populated (assuming that the digest logs are known).
```javascript
// subscribe to all new headers (with extended info)
api.derive.chain.subscribeNewHeads((header) => {
  console.log(`#${header.number}: ${header.author}`);
});
```
The transactions are included in a signed block as part of the extrinsics - some of these will be unsigned and generated by the block author, and some may be submitted from external sources and be signed. (Some pallets do use unsigned transactions, so signed/unsigned is not an indication of origin.) To retrieve the block and display the transaction information, we can do the following:
```javascript
// no blockHash is specified, so we retrieve the latest
const signedBlock = await api.rpc.chain.getBlock();

// the information for each of the contained extrinsics
signedBlock.block.extrinsics.forEach((ex, index) => {
  // the extrinsics are decoded by the API, human-like view
  console.log(index, ex.toHuman());

  const { isSigned, meta, method: { args, method, section } } = ex;

  // explicit display of name, args & documentation
  console.log(`${section}.${method}(${args.map((a) => a.toString()).join(', ')})`);
  console.log(meta.documentation.map((d) => d.toString()).join('\n'));

  // signer/nonce info
  if (isSigned) {
    console.log(`signer=${ex.signer.toString()}, nonce=${ex.nonce.toString()}`);
  }
});
```
In the above, `.toHuman()` is used to format into a human-readable representation. You can inspect/extract specific fields from the decoded extrinsic as required; for instance, `ex.method.section` would return the pallet that executed this transaction.

While the blocks contain the extrinsics, the system event storage contains the events and the details needed to allow for a mapping between the two. For events, the `phase` is an enum that would be `isApplyExtrinsic` with the index in the cases where it refers to an extrinsic in a block. This index maps through the order of the extrinsics as found. To perform a mapping between the two, we need information from both sources.
```javascript
// no blockHash is specified, so we retrieve the latest
const signedBlock = await api.rpc.chain.getBlock();
const allRecords = await api.query.system.events.at(signedBlock.block.header.hash);

// map between the extrinsics and events
signedBlock.block.extrinsics.forEach(({ method: { method, section } }, index) => {
  // filter the specific events based on the phase and then the
  // index of our extrinsic in the block
  const events = allRecords
    .filter(({ phase }) =>
      phase.isApplyExtrinsic &&
      phase.asApplyExtrinsic.eq(index)
    )
    .map(({ event }) => `${event.section}.${event.method}`);

  console.log(`${section}.${method}:: ${events.join(', ') || 'no events'}`);
});
```
This is an extension of the above example where extrinsics are mapped to their blocks. However, in this example we will look for specific extrinsic events, in this case the `system.ExtrinsicSuccess` and `system.ExtrinsicFailed` events. The same logic can be applied to inspect any other type of expected event.

```javascript
// no blockHash is specified, so we retrieve the latest
const signedBlock = await api.rpc.chain.getBlock();
const allRecords = await api.query.system.events.at(signedBlock.block.header.hash);

// map between the extrinsics and events
signedBlock.block.extrinsics.forEach(({ method: { method, section } }, index) => {
  allRecords
    // filter the specific events based on the phase and then the
    // index of our extrinsic in the block
    .filter(({ phase }) =>
      phase.isApplyExtrinsic &&
      phase.asApplyExtrinsic.eq(index)
    )
    // test the events against the specific types we are looking for
    .forEach(({ event }) => {
      if (api.events.system.ExtrinsicSuccess.is(event)) {
        // extract the data for this event
        // (In TS, because of the guard above, these will be typed)
        const [dispatchInfo] = event.data;

        console.log(`${section}.${method}:: ExtrinsicSuccess:: ${dispatchInfo.toHuman()}`);
      } else if (api.events.system.ExtrinsicFailed.is(event)) {
        // extract the data for this event
        const [dispatchError, dispatchInfo] = event.data;
        let errorInfo;

        // decode the error
        if (dispatchError.isModule) {
          // for module errors, we have the section indexed, lookup
          // (For specific known errors, we can also do a check against the
          // api.errors.<module>.<ErrorName>.is(dispatchError.asModule) guard)
          const decoded = api.registry.findMetaError(dispatchError.asModule);

          errorInfo = `${decoded.section}.${decoded.name}`;
        } else {
          // Other, CannotLookup, BadOrigin, no extra info
          errorInfo = dispatchError.toString();
        }

        console.log(`${section}.${method}:: ExtrinsicFailed:: ${errorInfo}`);
      }
    });
});
```
Here you will find snippets for working with storage.
In the metadata, a fallback is provided for each storage item. When an entry does not exist, the fallback (which is the default value for the type) is returned. This means that querying for a non-existent key (unless it is an Option) will still yield a value:
```javascript
// retrieve Option<StakingLedger>
const ledger = await api.query.staking.ledger('EoukLS2Rzh6dZvMQSkqFy4zGvqeo14ron28Ue3yopVc8e3Q');

// retrieve ValidatorPrefs (will yield the default value)
const prefs = await api.query.staking.validators('EoukLS2Rzh6dZvMQSkqFy4zGvqeo14ron28Ue3yopVc8e3Q');

console.log(ledger.isNone, ledger.isSome); // true, false
console.log(JSON.stringify(prefs.toHuman())); // {"commission":"0"}
```
In the second case, the non-existent prefs returns the default/fallback value for the storage item. So in this case we don't know if the value is set to 0 or unset. Existence can be checked by using the storage size, which would be zero if nothing is stored.
```javascript
// exists
const sizeY = await api.query.staking.validators.size('DB2mp5nNhbFN86J9hxoAog8JALMhDXgwvWMxrRMLNUFMEY4');

// non-existent
const sizeN = await api.query.staking.validators.size('EoukLS2Rzh6dZvMQSkqFy4zGvqeo14ron28Ue3yopVc8e3Q');

console.log(sizeY.isZero(), sizeY.toNumber()); // false 4
console.log(sizeN.isZero(), sizeN.toNumber()); // true 0
```
As explained elsewhere, each map-type storage entry exposes the entries/keys helpers to retrieve the whole list. In the case of double maps, with the addition of a single argument, you can retrieve either all entries or a subset based on the first map key.

In both these cases, entries/keys operate the same way: `.entries()` retrieves `(StorageKey, Codec)[]` and `.keys()` retrieves `StorageKey[]`.
```javascript
// Retrieves the entries for all slashes, in all eras (no arg)
const allEntries = await api.query.staking.nominatorSlashInEra.entries();

// nominatorSlashInEra(EraIndex, AccountId) for the types of the key args
allEntries.forEach(([{ args: [era, nominatorId] }, value]) => {
  console.log(`${era}: ${nominatorId} slashed ${value.toHuman()}`);
});
```
Likewise, we can retrieve only the keys for a specific era, using an argument for the first part of the double map (as defined here, an `EraIndex`):

```javascript
// Retrieves the keys for the slashed nominators in era 652
const slashedKeys = await api.query.staking.nominatorSlashInEra.keys(652);

// key args still contains [EraIndex, AccountId] decoded
console.log(`slashed: ${slashedKeys.map(({ args: [era, nominatorId] }) => nominatorId).join(', ')}`);
```
A blockchain is no fun if you are not submitting transactions. Or at least if somebody is not submitting any. Here you will find some snippets for dealing with some common issues.
In addition to the `signAndSend` helper on transactions, `.paymentInfo` (with the exact same parameters) is also exposed. Using the same sender, it applies a dummy signature to the transaction and then gets the fee estimation via RPC.

```javascript
// estimate the fees as RuntimeDispatchInfo, using the signer (either
// address or locked/unlocked keypair) (When overrides are applied, e.g.
// nonce, the format would be `paymentInfo(sender, { nonce })`)
const info = await api.tx.balances
  .transfer(recipient, 123)
  .paymentInfo(sender);

// log relevant info, partialFee is Balance, estimated for current
console.log(`
  class=${info.class.toString()},
  weight=${info.weight.toString()},
  partialFee=${info.partialFee.toHuman()}
`);
```
Assuming you are sending a tx via `.signAndSend`, the callback yields information around the tx pool status as well as any events when `isInBlock` or `isFinalized`. If an extrinsic fails via a `system.ExtrinsicFailed` event, you can retrieve the error, if it is defined as an enum on a module.

```javascript
api.tx.balances
  .transfer(recipient, 123)
  .signAndSend(sender, ({ status, events }) => {
    if (status.isInBlock || status.isFinalized) {
      events
        // find/filter for failed events
        .filter(({ event }) =>
          api.events.system.ExtrinsicFailed.is(event)
        )
        // we know that data for system.ExtrinsicFailed is
        // (DispatchError, DispatchInfo)
        .forEach(({ event: { data: [error, info] } }) => {
          if (error.isModule) {
            // for module errors, we have the section indexed, lookup
            const decoded = api.registry.findMetaError(error.asModule);
            const { docs, method, section } = decoded;

            console.log(`${section}.${method}: ${docs.join(' ')}`);
          } else {
            // Other, CannotLookup, BadOrigin, no extra info
            console.log(error.toString());
          }
        });
    }
  });
```
As of `@polkadot/api` 2.3.1, additional result fields are exposed. Firstly there is `dispatchInfo: DispatchInfo`, which occurs in both `ExtrinsicSuccess` and `ExtrinsicFailed` events. Additionally, on failures the `dispatchError: DispatchError` is exposed. With this in mind, the above can be simplified to be:

```javascript
api.tx.balances
  .transfer(recipient, 123)
  .signAndSend(sender, ({ status, events, dispatchError }) => {
    // status would still be set, but in the case of error we can shortcut
    // to just check it (so an error would indicate InBlock or Finalized)
    if (dispatchError) {
      if (dispatchError.isModule) {
        // for module errors, we have the section indexed, lookup
        const decoded = api.registry.findMetaError(dispatchError.asModule);
        const { docs, name, section } = decoded;

        console.log(`${section}.${name}: ${docs.join(' ')}`);
      } else {
        // Other, CannotLookup, BadOrigin, no extra info
        console.log(dispatchError.toString());
      }
    }
  });
```
The section above shows you how to listen for the result of a regular extrinsic. However, Sudo extrinsics do not directly report the success or failure of the underlying call. Instead, a Sudo transaction will return `Sudid(result)`, where `result` will be the information you are looking for. To properly parse this information, we will follow the steps above, but then specifically peek into the event data to find the final result:
```javascript
const unsub = await api.tx.sudo
  .sudo(
    api.tx.balances.forceTransfer(user1, user2, amount)
  )
  .signAndSend(sudoPair, ({ status, events }) => {
    if (status.isInBlock || status.isFinalized) {
      events
        // We know this tx should result in `Sudid` event.
        .filter(({ event }) =>
          api.events.sudo.Sudid.is(event)
        )
        // We know that `Sudid` returns just a `Result`
        .forEach(({ event: { data: [result] } }) => {
          // Now we look to see if the extrinsic was actually successful or not...
          if (result.isError) {
            let error = result.asError;

            if (error.isModule) {
              // for module errors, we have the section indexed, lookup
              const decoded = api.registry.findMetaError(error.asModule);
              const { docs, name, section } = decoded;

              console.log(`${section}.${name}: ${docs.join(' ')}`);
            } else {
              // Other, CannotLookup, BadOrigin, no extra info
              console.log(error.toString());
            }
          }
        });

      unsub();
    }
  });
```
For most runtime modules, transactions need to be signed, and validation for this happens node-side. There are, however, modules that accept unsigned extrinsics; an example would be the Polkadot/Kusama token claims (which is used as an example here).
```javascript
// construct the transaction, exactly as per normal
const utx = api.tx.claims.claim(beneficiary, ethSignature);

// send it without calling sign, pass callback with status/events
utx.send(({ status }) => {
  if (status.isInBlock) {
    console.log(`included in ${status.asInBlock}`);
  }
});
```
The signing is indicated by the first byte in the transaction, so in this case we have called `.send` on it (no `.sign` or `.signAndSend`), so it will be sent in the unsigned state, without a signature attached.

Polkadot/Substrate provides a `utility.batch` method that can be used to send a number of transactions at once. These are then executed from a single sender (single nonce specified) in sequence. This is very useful in a number of cases: for instance, if you wish to create a payout for a validator for multiple eras, you can use this method. Likewise, you can send a number of transfers at once, or even batch different types of transactions.

```javascript
// construct a list of transactions we want to batch
const txs = [
  api.tx.balances.transfer(addrBob, 12345),
  api.tx.balances.transfer(addrEve, 12345),
  api.tx.staking.unbond(12345)
];

// construct the batch and send the transactions
api.tx.utility
  .batch(txs)
  .signAndSend(sender, ({ status }) => {
    if (status.isInBlock) {
      console.log(`included in ${status.asInBlock}`);
    }
  });
```
The fee for a batch transaction can be estimated similarly to the fee for a single transaction, using the exposed `.paymentInfo` helper method described earlier, and it is usually less than the sum of the fees for each individual transaction.

The `system.account` query will always contain the current state, i.e. it will reflect the nonce for the last known block. As such, when sending multiple transactions in quick succession (see batching above), there may be transactions in the pool that have the same nonce that `signAndSend` would apply - this call doesn't do any magic, it simply reads the state for the nonce. Since we can specify options to the `signAndSend` operation, we can override the nonce, either by manually incrementing it or by querying it via `rpc.system.accountNextIndex`.

```javascript
for (let i = 0; i < 10; i++) {
  // retrieve sender's next index/nonce, taking txs in the pool into account
  const nonce = await api.rpc.system.accountNextIndex(sender);

  // send, just retrieving the hash, not waiting on status
  const txhash = await api.tx.balances
    .transfer(recipient, 123)
    .signAndSend(sender, { nonce });
}
```
As a convenience, the `accountNextIndex` call can be omitted by specifying a nonce of `-1`, allowing the API to do the lookup. In this case the above can be simplified even further:

```javascript
for (let i = 0; i < 10; i++) {
  const txhash = await api.tx.balances
    .transfer(recipient, 123)
    .signAndSend(sender, { nonce: -1 });
}
```
The latter form is preferred since it dispatches the RPC calls for nonce and blockHash (used for mortality) in parallel and therefore will yield a better throughput, especially with the above bulk example.