Sunday, May 27, 2018

QUADRANT’S TESTNET IS LIVE!


Quadrant’s testnet is live, and its first feature, data stamping, has been operating successfully over the last few weeks, stamping a client’s live data feed. Data stamping is designed to provide data authenticity from the data source to the data consumer, acting as the foundational layer of trust in the network. By putting the data signature on the block, the Quadrant Protocol can track the movement of data and detect where manipulations have occurred. This ensures data authenticity and helps track provenance for compliance applications.
The team reached its goals during the extended tests, working with a real prototype for one of its clients, so it is confident of launching the mainnet as planned. If your company would like to be among the first to use Quadrant at launch, reach out via the contact details below.
Facts related to Quadrant Protocol:
Quadrant’s blockchain is up and running, with real client data being stamped
Throughput tests show that Quadrant would enable tens of thousands of data feeds to stamp the signatures of 1.92 TB of data per minute onto the chain.
The Producer Client is handling files up to 500 MB per stamp.
The Quadrant Producer Client is stamping a data pipe that delivers 300–400 MB every 5 minutes.
Depending on the data, clients can stamp at whatever cadence best suits their needs. The team’s research suggests that feed sizes of 50 MB to 400 MB strike an ideal balance between the number of stamps and the data-stamp processing rate.
Quadrant is an Ethereum sidechain with Proof of Authority consensus.
It is a secure, enterprise-grade sidechain that combines speed, security, and cost efficiency.
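As a sanity check, the quoted figures imply the following stamp rates. This is a back-of-envelope sketch that assumes decimal units (1 TB = 10^12 bytes) and that every feed stamps a full payload each time:

```python
# Back-of-envelope check of the quoted throughput figures
# (assumption: decimal units, i.e. 1 TB = 10**12 bytes).
TB = 10**12
MB = 10**6

bytes_stamped_per_min = 1.92 * TB  # signatures of 1.92 TB of data per minute

for feed_mb in (50, 400):  # the feed sizes the team calls an ideal balance
    stamps_per_min = bytes_stamped_per_min / (feed_mb * MB)
    print(f"{feed_mb} MB feeds -> {stamps_per_min:,.0f} stamps per minute")
```

At 400 MB per stamp this works out to 4,800 stamps per minute, and at 50 MB to 38,400, which shows why tens of thousands of feeds can share the chain.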
Testnet Scenario
The Producer Client processes a data feed that delivers 300–400 MB every 5 minutes.
When the data is saved into the S3 buckets, the Quadrant Producer Client is triggered and hashes the data.
This data hash, combined with the feed’s metadata, is sent to be stamped onto the Quadrant network.
The Data Consumer, upon receiving the data, runs the Quadrant Consumer Client, hashes the data, and verifies the data on the chain prior to consumption.
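The producer side of the steps above can be sketched as follows. This is a minimal illustration only: the SHA-256 hash function, the field names, and the `build_stamp` helper are assumptions for the sketch, not Quadrant’s actual client code.

```python
import hashlib
import time

def hash_file(path, chunk_size=2**20):
    """Stream-hash the raw bytes of a data file, so even 500 MB files
    never need to be held in memory at once."""
    h = hashlib.sha256()  # assumed hash function; the source does not specify one
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def build_stamp(path, feed_id):
    """Combine the data hash with the feed's metadata; this payload is what
    would be sent to the Quadrant network to be stamped."""
    return {
        "feedId": feed_id,             # hypothetical metadata fields
        "dataHash": hash_file(path),
        "timestamp": int(time.time()),
    }
```

In the testnet scenario, a trigger on the S3 bucket would call something like `build_stamp` for each new 300–400 MB file and submit the result as a stamping transaction.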
Quadrant Producer Client — Data Stamping
As soon as clients need to stamp and validate their data, they can execute the data stamping mechanism. A unique ‘transactionId’ identifies the stamping transaction, while a ‘blockHash’ and ‘blockNumber’ reference the block containing that transaction, and therefore the stamped hash and metadata. This allows anyone, at a later stage, to compare hashes and confirm the authenticity of the data that has been transmitted. Each block also carries a timestamp identifying when the data was stamped.
Blocks are created every five seconds. A user can search by block number, block hash, transaction hash, or account address. Upon exploring a block, users can view all the transactions inside it, the sending/receiving smart contract and account addresses, and the total gas consumed. The block explorer visual above is shown for illustrative purposes.
Data Consumer Client
The data can be verified by running the Quadrant Consumer Client, which looks up the data hash to confirm that the stamping transaction is authentic, identifies the block in which the stamp occurred, and retrieves further details of the transaction.
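The consumer-side check amounts to re-hashing the received data and looking the digest up on the chain. In this minimal sketch the `chain_index` dict stands in for the on-chain lookup, and SHA-256 is an assumed hash function:

```python
import hashlib

# A plain dict stands in for the on-chain index of stamped hashes.
chain_index = {}  # dataHash -> {"blockNumber": ..., "transactionId": ...}

def verify(received_bytes):
    """Re-hash the received data and look it up on the (mocked) chain.
    Returns the stamp details if the data is authentic, else None."""
    digest = hashlib.sha256(received_bytes).hexdigest()
    return chain_index.get(digest)

# Simulate a producer having stamped this payload earlier:
digest = hashlib.sha256(b"feed payload").hexdigest()
chain_index[digest] = {"blockNumber": 42, "transactionId": "0xabc"}

print(verify(b"feed payload"))   # stamp found -> data is authentic
print(verify(b"tampered data"))  # None -> hash not on chain, reject the data
```

Because even a single changed byte produces a completely different digest, any manipulation between producer and consumer fails the lookup.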
This simple yet powerful first feature of Quadrant is the foundation for what is to come. Next, the team will enable the next generation of Data Smart Contracts to be deployed on the network.
Technical Next Steps
With the throughput and real-client use case validated, the next steps are to continue testing the robustness of the Producer Client against other data-feed scenarios and to launch the mainnet.
Data Vendors
For many Data Vendors, the path to data monetisation is a journey rather than a set of linear steps. Initially, Data Vendors struggle to find a proper product-market fit for their data. They create multiple products over time until one proves successful to a consumer group. When this happens, they then seek to maximise revenue from the product.
While the replication and distribution of data is relatively cheap, production costs can be high. It is essential that Data Vendors are able to cover the capital and input costs incurred during the creation and productising of their data assets because once the data leaves their walls, it can be duplicated at almost no cost.
Data Vendors have no desire to incur significant costs to create a data product, only to have it duplicated and made available by competitors at a lower cost. They want to receive fair pay for the products that they produce and want their products to be used in ways that even they could not think of in order to maximise their revenue. They would also like to know who is utilising their data because it helps them to understand the different ways in which their data can be used; it can even motivate them to enrich their data further.
Atomic Data Producers (ADPs)
At this level of the data value chain, the biggest problem is that ADPs are not paid their fair share of the revenue generated by the data they produce. Individual data has little value on its own; its real value is derived when it is combined with other data sets. As a result, most data producers sell their data up the value chain to aggregators and resellers, who can sell interesting data sets alongside one another to multiply the impact of the insights.
The problem for ADPs is that they receive payment only once, no matter how many times the data is resold via the resellers and aggregators. Each additional sale beyond the initial transaction (between the ADP and the reseller) does not translate into revenue for the ADP.
That is not the only thing working against ADPs. With existing data transaction architectures, there are prohibitive costs incurred in compensating ADPs for the data that they provide. Take a CSV file that has thousands of medical prescriptions sourced from multiple ADPs as an example. Figuring out the exact percentage of revenue to share amongst the contributing ADPs is inherently cumbersome and expensive.
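To illustrate why this accounting is cumbersome off-chain but mechanical once contributions are tracked, a proportional revenue split over tracked contributions could look like this. The `split_revenue` helper is hypothetical, not part of Quadrant:

```python
from collections import Counter

def split_revenue(rows, revenue):
    """Split sale revenue among ADPs in proportion to the records each
    contributed. `rows` is a list of (record, adp_id) pairs, e.g. the
    prescriptions in a CSV sourced from multiple ADPs."""
    counts = Counter(adp_id for _, adp_id in rows)
    total = sum(counts.values())
    return {adp_id: revenue * n / total for adp_id, n in counts.items()}

# Hypothetical example: adp-a contributed 2 of the 3 records.
rows = [("rx1", "adp-a"), ("rx2", "adp-a"), ("rx3", "adp-b")]
print(split_revenue(rows, 30.0))  # -> {'adp-a': 20.0, 'adp-b': 10.0}
```

The hard part today is not the arithmetic but reliably knowing, for every resale, which ADP contributed which records; that provenance is exactly what on-chain stamping is meant to provide.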
Please visit the links below.
My Bitcointalk username: kalindu
