
Introducing JustSmartContracts.dev — web tool for interacting with Ethereum smart contracts



Here I’d like to introduce a website that my friend and I developed to provide a better experience for Ethereum smart contract developers and blockchain enthusiasts — https://justsmartcontracts.dev/. But first I’m going to explain the reasons that led us to create it.

Why did we need it?

For the last two years, I’ve been working closely on a project based on the Ethereum blockchain. The job includes both developing smart contracts and performing various off-chain interactions. The process of developing a smart contract itself is pretty straightforward: write code, truffle compile, truffle test, repeat. However, interacting with an already deployed contract, and the deployment procedure itself, always seemed somewhat annoying (especially if you need to sign transactions on a secure cold machine). I felt a lack of user-oriented services and didn’t quite like the existing ones.
Besides, I wanted to try front-end development myself, and an Ethereum-oriented website seemed like a perfect scenario for this. Initially, we planned to implement this set of features, simple yet useful:

  1. Store ABI+address records in the browser. No mandatory registration or anything like that.
  2. Quickly switch between networks: Mainnet/Ropsten/etc. or your own localhost testnet. Just like good old vintage MyEtherWallet.
  3. Group similar contract parts together: view functions that can be called without a transaction, regular functions that should be called via a transaction, and events.
  4. User-friendly search within contract events.
  5. User-friendly deploy interface.

How to use JustSmartContracts

For the demonstration, I’ll use a special test contract deployed on the Ropsten testnet. Its source code is available here. This contract contains everything we need for tests and demonstrations: public data, view functions, payable functions, transaction functions, several events, and a constructor with parameters. It has no real purpose other than testing.

Deploy

For now, JustSmartContracts uses only Metamask to sign transactions. Alternatively, you can download any transaction and sign it with whatever tool you want.
Let’s first select the Deploy tab …

…and specify the bytecode and the ABI. You can copy/paste this information from the truffle build file or just drag-and-drop that file onto the page. After the ABI is entered, we can specify the constructor parameters.

Pressing the Generate button reveals the transaction interface. Then you only need to enter the From address. You can do it manually or use the Metamask button, which copies the address from your active Metamask account. Sign the transaction and wait until it is confirmed.

Interact with contract

Let’s find the deployed contract’s address using any blockchain explorer you like. Then we select the Browser tab and press Add contract. Don’t forget to keep the current network set to Ropsten.

It is important to enter the correct network id. For Mainnet it is 1, for Ropsten it is 3, etc.

All the contract’s data is split into four categories:

1. Properties. This includes basic information like the contract’s address and its Ether balance, along with public data variables and view functions with no parameters. In other words, everything that can be queried without additional user input.

2. Calls. This includes view functions with parameters. In other words, functions that can be called without sending a transaction but that require additional user input.

3. Operations. These are contract functions that require executing a transaction. Let’s look at the contract’s payable method, for instance.

Pressing the Generate button opens the same transaction interface we used earlier to deploy the contract. Only this time it is also populated with the To address (the contract) and the Ether value to send.

Let’s sign and send the transaction, and after it is confirmed we’ll inspect the contract’s events.

4. Events. Obviously, all of the contract’s events fall into this category. Let’s find the one that reflects the operation we’ve just performed — EtherPaid. We can use a filter on an indexed parameter to narrow the search down.

Other features

JustSmartContracts uses the browser’s local storage to store your contracts and custom network information, so if you or your browser clears the storage, your contracts will be lost. It is also worth mentioning that JustSmartContracts is still at an early stage of development, so minor bugs may occur.

Conclusion

We hope https://justsmartcontracts.dev will prove to be a useful and convenient tool for Ethereum and Solidity developers. We definitely see room for improvements and new features, and we’ll try to implement them soon.


Introducing JustSmartContracts.dev — web tool for interacting with Ethereum smart contracts was originally published in Coinmonks on Medium, where people are continuing the conversation by highlighting and responding to this story.



Caching Ethereum events with MySQL


In this article, I am going to demonstrate a simple approach to caching Ethereum events. I won’t describe what events are, as there are a lot of articles covering that topic (here is a perfect one). I’ll just say that we typically use events for off-chain operations, for example tracking token transfers or retrieving a filtered list of particular transactions, just like a good old SQL query.

Let’s suppose we want to make a website that tracks some token transfers, a kind of Etherscan. We definitely need simple operations like:

  • get all the token transfers
  • get transfers made from a particular Ethereum address
  • get transfers made to a particular Ethereum address
  • get transfers above or below a particular amount
  • get transfers within particular time frames

What web3 offers now is the getPastEvents method; an example usage is:

let events = await contract.getPastEvents(
  "Transfer",
  {
    filter: { from: '0x0123456789abcdef0123456789abcdef01234567' },
    fromBlock: 0,
    toBlock: 'latest'
  }
);

The main issue with this approach is that it can be slow as the blockchain grows, especially if you don’t run your own Ethereum node and use public providers like Infura or MyEtherApi.

The next thing: it is almost impossible to implement more complex queries, as the filter object’s functionality is quite limited.

Besides, events already written to the blockchain can’t be changed; only new records can be added over time. This and other facts make events a perfect target for caching.

Database choice

In this example, we’ll use MySQL as the database for holding our event records. MySQL can store raw JSON and compose queries using the JSON object’s properties as if they were regular SQL columns.

What should we store?

Let’s take a closer look at the result of the getPastEvents method to understand what data we’re working with. I took some Binance coin transfers as an example. Each event object has the following structure:

{
  "address": "0xB8c77482e45F1F44dE1745F52C74426C631bDD52",
  "blockHash": "0x19e0d4c4cce0ed7c429b627fc6c5cc5c223c2e9218e233ab2b72e64e817cfcc2",
  "blockNumber": 6813922,
  "logIndex": 111,
  "removed": false,
  "transactionHash": "0x32d660785112b084135e0d4d2b53c0d67e851b735eacb486e44e52b7945b857d",
  "transactionIndex": 84,
  "id": "log_5ea90f71",
  "returnValues": {
    "0": "0x6ACe7E0abCF0dA3097Fa7155149dccd51E20EF82",
    "1": "0xAc951701644936aA95C80ED9f358Fa28f8369075",
    "2": "1000553200000000000",
    "from": "0x6ACe7E0abCF0dA3097Fa7155149dccd51E20EF82",
    "to": "0xAc951701644936aA95C80ED9f358Fa28f8369075",
    "value": "1000553200000000000"
  },
  "event": "Transfer",
  "signature": "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
  "raw": {
    "data": "0x0000000000000000000000000000000000000000000000000de2add590e16000",
    "topics": [
      "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
      "0x0000000000000000000000006ace7e0abcf0da3097fa7155149dccd51e20ef82",
      "0x000000000000000000000000ac951701644936aa95c80ed9f358fa28f8369075"
    ]
  }
}

As you can see, the event arguments are stored in the returnValues property. blockNumber, transactionHash and logIndex might be useful too, as I’ll show later.

Our goal is to write these JSON objects to the database and to implement easy access methods that can seamlessly replace web3’s standard getPastEvents method.

Here is the SQL script for creating the Transfer table.

CREATE TABLE `eth_cache`.`transfer` (
  `id` INT NOT NULL AUTO_INCREMENT,
  `json` JSON NOT NULL,
  `from` VARCHAR(45) GENERATED ALWAYS AS (json_unquote(json_extract(`json`,'$.returnValues.from'))) VIRTUAL,
  `to` VARCHAR(45) GENERATED ALWAYS AS (json_unquote(json_extract(`json`,'$.returnValues.to'))) VIRTUAL,
  `value` VARCHAR(45) GENERATED ALWAYS AS (json_unquote(json_extract(`json`,'$.returnValues.value'))) VIRTUAL,
  `txHash` VARCHAR(66) GENERATED ALWAYS AS (json_unquote(json_extract(`json`,'$.transactionHash'))) VIRTUAL,
  `logIndex` INT GENERATED ALWAYS AS (json_unquote(json_extract(`json`,'$.logIndex'))) VIRTUAL,
  PRIMARY KEY (`id`),
  UNIQUE INDEX `IDX_UNIQUE` (`txHash` ASC, `logIndex` ASC));

Some important things to explain:

  1. The json column has the JSON type. This allows us to create auto-generated columns using special syntax.
  2. from, to, value — these are auto-generated columns. The expression may seem complex at first, but it is really simple: for example, the from column’s value equals the returnValues.from property of the object stored in the json column.
  3. txHash and logIndex. Combined, these properties uniquely identify every event object. We need them to build the unique index for a row, thus preventing accidental duplication of events.

Optionally, we could also add database indexes to improve performance. For example, for the to column:

ALTER TABLE `eth_cache`.`transfer`
ADD INDEX `IDX_TO` (`to` ASC);

Implementation

Prerequisites

  1. Node.js. I use version 8.4.0.
  2. The web3 npm package to interact with the blockchain. We need the specific version 1.0.0-beta.35; using the latest version, beta.36, resulted in a ‘Returned values aren’t valid, did it run Out of Gas’ error when trying to retrieve some events.

npm install web3@1.0.0-beta.35 --save

  3. To work with a MySQL database in JavaScript, we should install the mysql package:

npm install mysql --save

  4. And the last one is a MySQL server. It is worth mentioning that we’ll use MySQL 5.7, as the latest 8.0 version doesn’t seem to be compatible with the mysql package (it gave me a strange ER_NOT_SUPPORTED_AUTH_MODE error while trying to connect).

Interacting with MySQL

We’ll use a connection pool to make queries in this example:

const mysql = require('mysql');
let pool = mysql.createPool({
  connectionLimit: <connection limit>,
  host: <database server address>,
  user: <user>,
  password: <password>,
  database: <name of your database schema>
});

It is more convenient to use a promisified version of the query method:

const util = require('util');  
pool.query = util.promisify(pool.query);

Now we can use the following code to insert a record into the transfer table created before.

async function writeEvent(event) {
  try {
    await pool.query(
      `Insert into \`transfer\` (\`json\`) VALUES ('${JSON.stringify(event)}')`
    );
  } catch (e) {
    // if it's a 'duplicate record' error, do nothing;
    // otherwise rethrow
    if (e.code != 'ER_DUP_ENTRY') {
      throw e;
    }
  }
}

Here we also check for a possible duplicate row insertion. We don’t want to do anything special in that case; probably we’ve already written those events earlier. So we just consider this kind of exception handled.

Base caching function

Let’s construct a contract object to retrieve events from.

let contract = new web3.eth.Contract(abi, <contractAddress>);

We can include only the Transfer event interface in the abi parameter, like this:

let abi = [{
  "anonymous": false,
  "inputs": [
    { "indexed": true, "name": "from", "type": "address" },
    { "indexed": true, "name": "to", "type": "address" },
    { "indexed": false, "name": "value", "type": "uint256" }
  ],
  "name": "Transfer",
  "type": "event"
}];

This is the base version of the caching function. First we get the event objects, then write them to the database one by one.

async function cacheEvents(fromBlock, toBlock) {
  let events = await contract.getPastEvents(
    "Transfer",
    { filter: {}, fromBlock: fromBlock, toBlock: toBlock }
  );

  for (const event of events) {
    await writeEvent(event);
  }
}

Regular blockchain scanning

Now let’s expand this into a simple background script that constantly scans the blockchain for emitted events.

Some utility functions

const timeout = 30;
function sleep(milliseconds) {
  return new Promise(resolve =>
    setTimeout(resolve, milliseconds)
  );
}

async function poll(fn) {
  await fn();
  await sleep(timeout * 1000);
  await poll(fn);
}

The first one is a simple async/await implementation of setTimeout. The second one serves for infinite periodic calls of fn, the worker function.

With these helper functions, our background scanner looks quite simple

async function scan() {
  const MaxBlockRange = 500000;
  let latestCachedBlock = 0; // latest block written to database
  let latestEthBlock = 0;    // latest block in blockchain

  await poll(async () => {
    try {
      // get the latest block number in the blockchain
      latestEthBlock = await web3.eth.getBlockNumber();

      // divide huge block ranges into smaller chunks,
      // of, say, 500000 blocks max
      latestEthBlock = Math.min(
        latestEthBlock,
        latestCachedBlock + MaxBlockRange
      );

      // if it is greater than the cached block, search for events
      if (latestEthBlock > latestCachedBlock) {
        await cacheEvents(latestCachedBlock, latestEthBlock);

        // if everything is OK, update the cached block value
        latestCachedBlock = latestEthBlock + 1;
      }
    } catch (e) {
      // we might want to add some simple logging here
      console.log(e.toString());
    }
  });
}

Let me explain that ‘latestEthBlock + 1’ part. Web3’s getPastEvents(fromBlock, toBlock) returns events written within the [from, to] range, including the borders. So without this increment, the next cacheEvents call would again return the events written in latestEthBlock as part of the result.

Though duplicate events won’t be inserted into the database thanks to the unique index, we still don’t want this excess work to be done.

This implementation should be pretty much enough for a simple background scanner. However, there is always room for improvement; we’ll return to it a bit later. Now let’s take a quick look at what we can do with that data.

Retrieving the events

Here is an example of a function that selects transfers made from a particular address:

async function selectTransfersFrom(sender) {
  return await pool.query(
    `select json from transfer t where t.from = '${sender}'`
  );
}

We query the database using the generated from column. The most notable part is that the result of this function looks just like the result of web3’s getPastEvents, which makes refactoring existing code a lot easier.
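The other operations from our wish list (transfers to an address, above or below an amount, within a block range) can be served the same way through the generated columns and MySQL’s JSON functions. Below is a hedged sketch of a small helper that assembles such queries; the helper name and its options are my own, not part of the original code:

```javascript
// Sketch: build a parameterized query over the transfer table using
// the generated columns. Option names are assumptions.
function buildTransferQuery({ from, to, minValue, maxValue, fromBlock, toBlock } = {}) {
  const clauses = [];
  const params = [];

  if (from) { clauses.push('t.`from` = ?'); params.push(from); }
  if (to) { clauses.push('t.`to` = ?'); params.push(to); }

  // `value` is stored as a string, so cast it for numeric comparison
  if (minValue !== undefined) {
    clauses.push('cast(t.`value` as decimal(65)) >= ?');
    params.push(minValue);
  }
  if (maxValue !== undefined) {
    clauses.push('cast(t.`value` as decimal(65)) <= ?');
    params.push(maxValue);
  }

  // block range comes straight from the stored JSON document
  if (fromBlock !== undefined) {
    clauses.push("json_extract(t.`json`, '$.blockNumber') >= ?");
    params.push(fromBlock);
  }
  if (toBlock !== undefined) {
    clauses.push("json_extract(t.`json`, '$.blockNumber') <= ?");
    params.push(toBlock);
  }

  const where = clauses.length ? ' where ' + clauses.join(' and ') : '';
  return { sql: 'select json from transfer t' + where, params };
}
```

The result can then be passed to the pool as pool.query(q.sql, q.params), relying on the mysql package’s ? placeholder substitution.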

Further improvements

The event object contains a lot of properties that might be totally useless for your application. It is better to remove the excess before writing to the database; that way we save a lot of space.

async function writeEvent(event) {
  try {
    delete event.raw;
    delete event.event;
    delete event.blockHash;
    delete event.type;
    delete event.id;
    delete event.signature;

    await pool.query(
      `Insert into \`transfer\` (\`json\`) VALUES ('${JSON.stringify(event)}')`
    );
  } catch (e) {
    // if it's a 'duplicate record' error, do nothing,
    // otherwise rethrow
    if (e.code != 'ER_DUP_ENTRY') {
      throw e;
    }
  }
}

As you might have noticed, the current version of the scanner begins at block #0 each time it restarts. While scanning all the way to the current block, it will try to insert duplicate records into the database. We can eliminate that excess work by querying the database for the latest cached block.

It would also be nice to start scanning not from block #0 but at least from the block where the contract was deployed. For simplicity, you can get this information from etherscan.io.

async function getLatestCachedBlock() {
  const defaultInitialBlock = <your contract’s deployment block>;

  let dbResult = await pool.query(
    'select json_unquote(json_extract(`json`,\'$.blockNumber\')) \
     as block from transfer order by id desc limit 1'
  );

  return dbResult.length > 0 ?
    parseInt(dbResult[0].block) :
    defaultInitialBlock;
}

Here we again use MySQL’s JSON functions to get the blockNumber property of the event object.

Then replace the old piece of the scan function

let latestCachedBlock = 0;

with the new one

let latestCachedBlock = await getLatestCachedBlock();

Conclusion

Finally, we’ve created a simple but working event scanner that continuously caches events into a MySQL database. If you have any questions, please feel free to contact me and I’ll try to answer.

The complete source code is available here: https://github.com/olekon/p1_eth_caching.


Caching Ethereum events with MySQL was originally published in Coinmonks on Medium, where people are continuing the conversation by highlighting and responding to this story.

How to synchronize Strapi cron tasks


Hello and let’s get straight to the point.

Strapi is a great headless CMS. Besides, its cron module can be very useful in certain cases, for example, regularly fetching some data from a 3rd-party API. But there is a little problem.

A little problem

Everything works fine if we stick to a single-process configuration, i.e. a single database and a single Strapi app instance using it. However, today we use containers and orchestration tools, infrastructure can be scaled quite easily, and multiple application instances can be created in the blink of an eye. So the code should be written with these things in mind.

Imagine we run 3 Strapi instances as a website back-end. 3 instances mean 3 separate cron tasks running at the same time. Do we really need all 3 of them? And, more importantly, should we expect any bugs crawling in here?

Here is a real-world case as an example. We needed to add internationalization to our website, and that requirement also included translation of CMS-stored content. We chose Lokalise.com as the localization platform, as it allows involving translators from outside the company staff without granting them access to the CMS itself. The plan was:

  1. English (default language) content is stored directly in the Strapi database so content managers can edit it via the admin panel just like they used to.
  2. After content is edited, Strapi uploads the changes to Lokalise.com so translators can work on it.
  3. A Strapi cron task fetches the translated content on a regular basis and stores it in a special Locale model.
  4. A Strapi middleware checks requests’ query parameters and substitutes text content using the Locale model if a non-default language was requested.

So the cron module looked something like this.
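A rough sketch of that cron config (assuming Strapi v3’s config/functions/cron.js layout; the schedule, service name, and method are hypothetical, not the original code):

```javascript
// config/functions/cron.js — a sketch, not the original gist.
// The service and method names are hypothetical.
module.exports = {
  // every 10 minutes: pull translated content from Lokalise.com
  '*/10 * * * *': async () => {
    await strapi.services.locale.fetchTranslations();
  },
};
```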

After we deployed all this to a staging environment, I checked the logs, and what I found was that instead of one cron task launching every 10 minutes there were three of them. What’s more, two of them were throwing exceptions, as the Lokalise.com API doesn’t allow simultaneous requests with the same API token. We got three cron tasks because there are three Strapi application instances in the environment; that’s the answer.

So now I needed to synchronize the cron tasks so that only one would be executed. And no, I didn’t plan to give up the Strapi cron module entirely and replace it with system cron or something similar. Strapi cron still has access to the built-in strapi object, its services, controllers and models, which is a nice benefit.

Solution

In a nutshell, we’ll use a special Lock model and block access to it while a task is in progress.

A Lock model

First, let’s create this model. It is pretty simple: there is only one text field, Task, which is the task we would like to acquire a lock for. Here is the Strapi model config; all routes are default.

Acquiring the lock

The next part is a bit tricky. Our database is PostgreSQL, so we should use its connector, knex, directly to write the locking code. Luckily, Strapi provides a convenient interface to this connector as strapi.connections.default.

I extracted the function to a standalone module.

This lockTask function has only two arguments. The first one is the name of the task to acquire a lock for; it corresponds to the Task field of the Lock Strapi model. The second, task, is an async function called in case the lock is acquired. At the beginning, we get the knex object as strapi.connections.default.

Then we call knex.transaction to begin a transaction and pass a transaction handler function as its only argument. The locking job happens inside that handler.

We try to select the locks table row with a specific Task value. Calling transacting(t) signifies that the query should be part of transaction t (you can read here for a better understanding). We also specify the forUpdate clause to indicate that no other similar query should be allowed while the transaction is in progress. See the PostgreSQL docs:

FOR UPDATE causes the rows retrieved by the SELECT statement to be locked as though for update. This prevents them from being modified or deleted by other transactions until the current transaction ends. That is, other transactions that attempt UPDATE, DELETE, or SELECT FOR UPDATE of these rows will be blocked until the current transaction ends.

And finally, we add the noWait option to prevent waiting for other transactions to finish:

With NOWAIT, the statement reports an error, rather than waiting, if a selected row cannot be locked immediately.

To sum up, only one Strapi app instance will be able to get past this query, i.e. obtain the lock. All others will go straight to the catch block.

The first time we lock a task, there is no corresponding Lock record, so it must be created.

However, as there was no actual lock the first time, all Strapi app instances would be able to execute this insert query. That’s why the Task field of the Lock model should be declared as unique, so there will be no duplicates anyway.
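Putting the steps above together, the locking module might look roughly like this. This is a sketch reconstructed from the description, not the original gist: the locks table name, the task column, and the silent catch are assumptions.

```javascript
// lockTask.js — sketch of the locking module described above.
// knex is passed in (in Strapi it would be strapi.connections.default);
// the `locks` table and `task` column names are assumptions.
async function lockTask(knex, taskName, task) {
  try {
    await knex.transaction(async (t) => {
      // SELECT ... FOR UPDATE NOWAIT: only one instance gets past this line
      const rows = await knex('locks')
        .transacting(t)
        .forUpdate()
        .noWait()
        .select('*')
        .where({ task: taskName });

      // first run ever: create the lock row (the task column is unique,
      // so concurrent first-run inserts cannot produce duplicates)
      if (rows.length === 0) {
        await knex('locks').transacting(t).insert({ task: taskName });
      }

      // the lock is held until the transaction ends, so run the task inside
      await task();
    });
  } catch (e) {
    // another instance holds the lock (or the insert raced); skip this run
  }
}

module.exports = lockTask;
```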

Now it’s time for the task itself to be processed.

And that’s all.

Wrapping cron tasks …

Now we just need to wrap our cron task with the locking function.
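A wrapped cron entry could then look like this (again a sketch; the file layout, import path, lock name, and service are hypothetical):

```javascript
// config/functions/cron.js — sketch of the wrapped task.
// The lockTask path, lock name, and service are hypothetical.
const lockTask = require('../../lib/lockTask');

module.exports = {
  '*/10 * * * *': async () => {
    const knex = strapi.connections.default;
    await lockTask(knex, 'fetch-translations', async () => {
      await strapi.services.locale.fetchTranslations();
    });
  },
};
```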

… and non-cron tasks

This approach might also be useful if you use the Strapi bootstrap function and want to perform some work only once.

After these fixes were deployed to the staging environment and I checked the logs once again, they showed that only one application instance was performing the actual task. Just as planned.




