Orakl Network VRF
Description
The Orakl Network VRF is one of the main Orakl Network solutions. It provides access to a provably random number generator.
The code is located under the `core` directory and is separated into three independent microservices: listener, worker, and reporter.
State Setup
The Orakl Network VRF requires access to the state of listeners and VRF keys.
Listener
The Orakl Network API holds information about all listeners. The command below adds a single VRF listener to the Orakl Network state to listen on `vrfCoordinatorAddress` for the `RandomWordsRequested` event. The `chain` parameter specifies the chain on which the Orakl Network VRF Listener is expected to operate.
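The original example is not preserved in this copy of the page. A plausible shape of the command, using the Orakl Network CLI's `listener insert` subcommand, is sketched below; the flag names are assumptions and should be checked against `orakl-cli listener --help`:

```sh
# A sketch only -- flag names are assumptions
orakl-cli listener insert \
  --chain ${chain} \
  --service VRF \
  --address ${vrfCoordinatorAddress} \
  --eventName RandomWordsRequested
```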
Reporter
The Orakl Network API holds information about all reporters. The command below adds a single VRF reporter to the Orakl Network state to report to `oracleAddress`. The `chain` parameter specifies the chain on which we expect to operate. A reporter is defined by the `address` and `privateKey` parameters.
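As above, the original example is not preserved here. A plausible sketch using the CLI's `reporter insert` subcommand follows; the flag names are assumptions:

```sh
# A sketch only -- flag names are assumptions
orakl-cli reporter insert \
  --chain ${chain} \
  --service VRF \
  --address ${address} \
  --privateKey ${privateKey} \
  --oracleAddress ${oracleAddress}
```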
VRF Keys
To be able to run VRF as a node operator, one must have registered VRF keys in `VRFCoordinator`, and the VRF keys have to be in the Orakl Network state as well. The VRF worker loads them from the Orakl Network API when it is launched.
If you do not have VRF keys, you can generate them with the Orakl Network CLI using the following command.
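The command itself is missing from this copy. Based on the `orakl-cli vrf insert` command referenced later in this section, the key generation subcommand is presumably of the following shape (the subcommand name is an assumption):

```sh
# Presumed subcommand name -- verify with `orakl-cli vrf --help`
orakl-cli vrf keygen
```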
The output of the `keygen` command will be similar to the one below, but including the actual key values on the right-hand side of each key name (`sk`, `pk`, `pkX`, `pkY`, and `keyHash`). VRF keys are generated randomly, therefore every time you call the `keygen` command you receive a different output.
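The original sample output is not preserved here; the sketch below only indicates the expected shape, with placeholders standing in for the generated values:

```
sk=<secret key>
pk=<public key>
pkX=<public key x-coordinate>
pkY=<public key y-coordinate>
keyHash=<key hash>
```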
`sk` represents a secret key which is used to generate the VRF `beta` and `pi`. This secret key should never be shared with anybody except the required personnel.
To store VRF keys in the Orakl Network state, use the `orakl-cli vrf insert` command. The `--chain` parameter corresponds to the network name with which the VRF keys will be associated.
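A sketch of the insertion command follows; the key-related flag names are assumptions and should be checked against the CLI's help output:

```sh
# A sketch only -- key flag names are assumptions
orakl-cli vrf insert \
  --chain ${chain} \
  --sk ${sk} \
  --pk ${pk} \
  --pkX ${pkX} \
  --pkY ${pkY} \
  --keyHash ${keyHash}
```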
Configuration
Before we launch the Orakl Network VRF, we must specify several environment variables. The environment variables are automatically loaded from a `.env` file.
```
NODE_ENV=production
CHAIN=
PROVIDER_URL=
ORAKL_NETWORK_API_URL=
LOG_LEVEL=
REDIS_HOST=
REDIS_PORT=
HEALTH_CHECK_PORT=
SLACK_WEBHOOK_URL=
```
The Orakl Network VRF is implemented in Node.js, which uses the `NODE_ENV` environment variable to signal the execution environment (e.g. `production`, `development`). Setting the environment to `production` generally ensures that logging is kept to a minimum and that more caching takes place to optimize performance.
The `CHAIN` environment variable specifies on which chain the Orakl Network VRF will be running, and which resources will be collected from the Orakl Network API.
`PROVIDER_URL` defines a URL string representing the JSON-RPC endpoint through which the listener and reporter communicate with the chain.
`ORAKL_NETWORK_API_URL` corresponds to the URL where the Orakl Network API is running. The Orakl Network API interface is used to access Orakl Network state such as listener and VRF key configuration.
The level of logs emitted by a running instance is set through the `LOG_LEVEL` environment variable, and can be one of the following: `error`, `warning`, `info`, `debug`, and `trace`, ordered from the most restrictive to the least. By selecting any of the available options, you subscribe to the specified level and all more restrictive levels.
`REDIS_HOST` and `REDIS_PORT` represent the host and port of the Redis instance to which all Orakl Network VRF microservices connect. The default values are `localhost` and `6379`, respectively.
The Orakl Network VRF does not offer a rich REST API, but it defines a health check endpoint (`/`) served under the port denoted by `HEALTH_CHECK_PORT`.
Errors and warnings emitted by the Orakl Network VRF can be sent to Slack channels through a Slack webhook. The webhook URL can be set with the `SLACK_WEBHOOK_URL` environment variable.
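For illustration, a filled-in `.env` might look as follows; every value below is a placeholder (the URLs and port numbers are assumptions), not a recommendation:

```
NODE_ENV=production
CHAIN=baobab
PROVIDER_URL=https://your-json-rpc-endpoint
ORAKL_NETWORK_API_URL=http://localhost:3000
LOG_LEVEL=info
REDIS_HOST=localhost
REDIS_PORT=6379
HEALTH_CHECK_PORT=8888
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/...
```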
Launch
Before launching the VRF solution, the Orakl Network API has to be accessible from the Orakl Network VRF in order to load VRF keys and listener settings.
After the Orakl Network API is healthy, launch the VRF service, which consists of the listener, worker, and reporter microservices, with the command below. The microservices communicate with each other through BullMQ, a job queue.
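The launch command itself is not preserved in this copy of the page. Assuming yarn scripts defined in the `core` package, it is presumably of the following shape (the script name is hypothetical):

```sh
# Hypothetical script name -- check core/package.json for the actual target
yarn start:vrf
```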
Run in dev mode through the following command:
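The dev-mode command is likewise missing here; it would presumably mirror the production script (the script name is hypothetical):

```sh
# Hypothetical script name
yarn dev:vrf
```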
It's also possible to run the microservices separately in any arbitrary order:
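The original per-microservice commands are not preserved either; under the same assumption about the yarn script naming, they would look roughly like this:

```sh
# Hypothetical script names -- check core/package.json for the actual targets
yarn start:listener:vrf
yarn start:worker:vrf
yarn start:reporter:vrf
```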
Quick launch with Docker
From the orakl repository's root, run the following command to build all images:
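The build command is missing from this copy; assuming a standard Docker Compose setup, it would be of roughly this shape:

```sh
# A sketch, assuming compose defaults; add -f <file> if the repository defines a dedicated compose file
docker-compose build
```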
Set the wallet credentials, the `ADDRESS` and `PRIVATE_KEY` values, in the `.core-cli-contracts.env` file. Keep in mind that the default chain is `localhost`. If changes are required, update the `CHAIN` (other options being `baobab` and `cypress`) and `PROVIDER_URL` values. Note that if the chain is not `localhost`, the `Coordinator` and `Prepayment` contracts won't be deployed; instead, Bisonai's already deployed contract addresses will be used. After setting the appropriate `.env` values, run the following command to start the VRF service:
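The start command is not preserved in this copy. Given the note below about running a single service (`rr` or `vrf`) at a time, it presumably names the `vrf` service; the selector below is an assumption:

```sh
# The vrf service selector is an assumption; --force-recreate is recommended below
docker-compose up vrf --force-recreate
```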
Note that the current Docker implementation is designed to run a single service, either `rr` or `vrf`, at a time. Therefore, it's highly recommended to add the `--force-recreate` flag when running the `docker-compose up` command. That will restart all containers, thus removing all the modified data in those containers.
Here is what happens after the above command is run:
- `api`, `postgres`, `redis`, and `json-rpc` services will start as separate docker containers
- `postgres` will get populated with necessary data:
  - chains
  - services
  - vrf keys
  - listener (after contracts are deployed)
  - reporter (after contracts are deployed)
- migration files in `contracts/v0.1/migration/` get updated with provided keys and other values
- if the chain is `localhost`:
  - the `contracts/v0.1/hardhat.config.cjs` file gets updated with `PROVIDER_URL`
  - relevant coordinator and prepayment contracts get deployed
Architecture