Qortal Project

The future of blockchain platforms


QDN Updates

Listed from most recent to oldest:

Notes From CalDescent 11/19/21

The idea of a separate "data node" was a possibility in the early stages of development, but we have now reached the stage where it's all merged into one. So every node will be "data enabled"; it just won't participate in hosting unless it views or follows content. You won't have to run a separate node for data. And it won't matter whether you're a minter or not, or whether you're running a full node or a top-only node. It works in all scenarios. The storage location for the data is configurable, so you can host the data on a separate drive if you want to.

In v3.0, there will be no rewards for hosting. That part is going to require a lot of thought and discussion, as it is very difficult to prove that a node is hosting a certain amount of data. Storj does it using audits, but these are centralized and no good for Qortal. It may be possible to do audits in a decentralized way, but we can look into that later. I expect we will ultimately have a "data market" where people can pay QORT to have their content hosted on more nodes (node owners would essentially sell their space at a price they choose, similar to how Sia does it).

In terms of allocating block rewards for data hosts, this is only possible if we can create a publicly verifiable association between a node and a minting key, and also a publicly verifiable way to prove that a node is hosting the data it claims to be. At that point I guess it's up to the community to decide on whether modifying block reward distribution to include data hosts is the right thing to do or not. I doubt it's worth discussing in too much depth yet as we are a long way away from being able to implement anything like that. These concepts will take a lot of work to create solutions for. On the plus side, any progress we make with this can also be applied to reduce the gaming of the sponsorship system.

There are no limits on the file formats that can be hosted. The core does restrict the types of data that can be used in certain cases - e.g. listing metadata has to be in a certain format, thumbnails have size limits, playlists will require a certain data structure to be adhered to, and so on. But in terms of just hosting a file to be shared, there are no limits other than the total file size limit, which is several hundred megabytes. Exact limit TBC.

We have to limit the total file size as otherwise the transaction size gets too big. This is because we store all the chunk hashes in the on chain transaction. In the future if we increase the size of each chunk, we can increase the total file size limit by the same proportion (or, we can fit more data transactions in each block if we increase chunk sizes and keep the total file size limit the same).
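To make the trade-off concrete, here is a rough sketch of the arithmetic. The 1MB chunk size is from the notes; the 32-byte hash size is an assumption (a SHA-256-style digest), and the 500 MiB file size is just an illustrative figure, not the actual limit.

```python
CHUNK_SIZE = 1 * 1024 * 1024  # bytes per chunk (1 MiB, per the notes)
HASH_SIZE = 32                # bytes per chunk hash (assuming a SHA-256-style digest)

def tx_hash_payload(total_file_size: int, chunk_size: int = CHUNK_SIZE) -> int:
    """Bytes of chunk hashes the on-chain transaction must carry for one file."""
    num_chunks = -(-total_file_size // chunk_size)  # ceiling division
    return num_chunks * HASH_SIZE

# A 500 MiB file at 1 MiB chunks needs 500 hashes = 16,000 bytes on chain.
print(tx_hash_payload(500 * 1024 * 1024))                  # 16000
# Doubling the chunk size halves the on-chain payload for the same file,
# which is why a bigger chunk allows a proportionally bigger file limit.
print(tx_hash_payload(500 * 1024 * 1024, 2 * CHUNK_SIZE))  # 8000
```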

Notes From CalDescent 11/17/21

Each website is broken up into chunks (currently 1MB each, but this will probably increase to 2MB+ at some point), and these chunks can be distributed in any way we choose. You could have one chunk per node, or all chunks on all nodes, or anything in between. The system pieces them together from all over the world when it needs to build the website back into its original state. The actual logic of the distribution of chunks is yet to be implemented; it currently attempts to store all chunks on each node that follows the name or views the content. But soon this will become more intelligent. I imagine we will have a total storage limit per name that is being followed. Once we reach the limit, we'll start dropping chunks evenly so that no one is holding the complete set. The node owner will have full oversight.

By default, no content will be hosted by each node. Nodes will only store the hashes of the data, which are on the chain that everyone has a copy of. Once a node owner either views some content or follows a content creator, they will then start hosting a copy of the content that is viewed or followed. The idea behind following is that you can support creators on the network that you trust, without having to view every single piece of their content.
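The chunking step described above can be sketched in a few lines. This is an illustrative model only, not the core's implementation; the 1 MiB chunk size is from the notes, and the use of SHA-256 for the chunk hashes is an assumption.

```python
import hashlib

CHUNK_SIZE = 1 * 1024 * 1024  # 1 MiB per chunk, per the notes

def chunk_and_hash(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Split a resource into fixed-size chunks and hash each one.
    The hashes go on chain; the chunks themselves are distributed off chain."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    hashes = [hashlib.sha256(c).digest() for c in chunks]  # assumed hash function
    return chunks, hashes

# A 2.5 MiB payload yields 3 chunks: two full chunks and one partial chunk.
chunks, hashes = chunk_and_hash(b"\x00" * (2 * 1024 * 1024 + 512 * 1024))
print(len(chunks))  # 3
```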

Regarding transferring illegal content across nodes: this was probably my #1 consideration when I started working on data hosting. I've given it more attention than any other aspect of the project, as it's very important to me that I don't have all sorts of unknown/unvetted material going through my nodes and internet connection. So the whole thing will be opt-in. The default settings will be:

- You host individual data resources that you view

- You also host complete datasets from creators that you follow

- Your node won't store or relay data unless it meets the above criteria

We may have an opt-in setting to allow you to become a "relay" - this allows data outside of the criteria to be relayed via your node but not stored. That will be disabled by default so it is entirely an opt-in feature. Relay nodes will help the network function more smoothly, so anyone that does it will be helping content to flow more easily. The whole thing is going to be a bit of an experiment. But I think we've covered enough of the basics for it to succeed, after a few rounds of fixes of course.
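The default rules above can be summarised in a small decision function. This is a hypothetical sketch, not Qortal's actual code: the `node` object with its `followed`/`viewed` sets and `relay_enabled` flag is invented here for illustration.

```python
from types import SimpleNamespace

def handle_data(resource_name: str, node) -> str:
    """Default handling per the notes (node is a hypothetical object):
    store data that meets the view/follow criteria; anything else is only
    relayed (forwarded, never stored) if the owner opted in to relay mode."""
    if resource_name in node.followed or resource_name in node.viewed:
        return "store"
    if node.relay_enabled:  # opt-in; disabled by default
        return "relay"
    return "drop"

node = SimpleNamespace(followed={"QortalDemo"}, viewed=set(), relay_enabled=False)
print(handle_data("QortalDemo", node))     # store
print(handle_data("SomeOtherName", node))  # drop
```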

Notes From CalDescent 10/26/21

The files are "grouped" by the transaction that they are contained in. So in the case of a website, you would have to download all files relating to a transaction (and potentially past transactions) in order to access any file contained within it. For example:

A user wants to view an image at /site/QortalDemo/wp-content/uploads/2021/02/qortal-logo.png (real image is here: http://node1.qortal.uk:12393/site/QortalDemo/wp-content/uploads/2021/02/qortal-logo.png)

In order to do this, they need to build the QortalDemo website resource.

The build system in the node gets notified that they have requested the file, and it goes and looks up arbitrary transactions for name QortalDemo with the service WEBSITE.

It may find just a recent PUT transaction, which contains the entire site, or it may find a PUT followed by multiple PATCH transactions.

Once it knows what layers are needed, it puts out requests to the network to retrieve the encrypted data files associated with these layers. For each file it checks against the hashes on chain to ensure they are correct.

Once it has all the files, it builds the website resource by decrypting+extracting each layer, applying the patches, and validating the hashes of the end result to ensure they match what was uploaded.
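The build steps above can be sketched as a single loop over the layers. Everything here is a simplified stand-in, not the core's actual API: `fetch_chunk`, `decrypt`, and `apply_patch` are hypothetical helpers, the layer dictionaries are an invented shape, and SHA-256 is assumed for the on-chain chunk hashes.

```python
import hashlib

def build_resource(layers, fetch_chunk, decrypt, apply_patch):
    """Sketch of the build flow: `layers` is the ordered list found on chain
    (a PUT, then any PATCHes), `fetch_chunk` retrieves an encrypted chunk from
    the network by its hash, and `decrypt`/`apply_patch` model the
    decrypt+extract and patching steps."""
    state = None
    for layer in layers:
        chunks = []
        for chunk_hash in layer["chunk_hashes"]:
            chunk = fetch_chunk(chunk_hash)
            # Verify each retrieved chunk against the hash stored on chain
            if hashlib.sha256(chunk).digest() != chunk_hash:
                raise ValueError("chunk failed hash check")
            chunks.append(chunk)
        layer_data = decrypt(b"".join(chunks))
        # A PUT replaces the whole state; a PATCH is applied on top of it
        state = layer_data if layer["type"] == "PUT" else apply_patch(state, layer_data)
    return state
```

With a dict standing in for the network, an identity function for decryption, and concatenation for patching, a PUT followed by a PATCH rebuilds the combined state.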

So the first time a website is accessed, it can take some time to build it depending on the complexity of the site. During this time a loading screen is shown which we can brand with the Qortal logo. Once built, the site is cached on the node so that subsequent accesses are instant (as seen in the QortalDemo and QortalWikiDemo).

From that point onwards, the node will be hosting a copy of all the files so that other nodes can request them and repeat the above process. They could optionally delete the files for some of the layers or even just a few chunks for a single layer, and they would still be helping the network by serving fragments of the complete site, in the same way it works with torrents.

I'm planning on having a "storage policy" setting with various possible values, such as: following only / viewed only / following and viewed / all / none. Then we can default to "following and viewed", and people will have the option of increasing or reducing the scope of their storage. I'll be working on this part over the next few weeks.
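The proposed setting maps naturally onto an enumeration. This is a sketch of the idea only; the names, string values, and `in_scope` helper are invented here, since the setting had not yet been implemented at the time of the notes.

```python
from enum import Enum

class StoragePolicy(Enum):
    """Possible values for the planned storage-policy setting (names assumed)."""
    FOLLOWING_ONLY = "following only"
    VIEWED_ONLY = "viewed only"
    FOLLOWING_AND_VIEWED = "following and viewed"  # proposed default
    ALL = "all"
    NONE = "none"

def in_scope(policy: StoragePolicy, is_followed: bool, is_viewed: bool) -> bool:
    """Whether a resource falls inside a node's storage scope."""
    if policy is StoragePolicy.ALL:
        return True
    if policy is StoragePolicy.NONE:
        return False
    if policy is StoragePolicy.FOLLOWING_ONLY:
        return is_followed
    if policy is StoragePolicy.VIEWED_ONLY:
        return is_viewed
    return is_followed or is_viewed  # FOLLOWING_AND_VIEWED
```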

Notes From CalDescent 8/18/21

For anyone interested in checking out the progress, you can do so here: https://github.com/CalDescent14/qortal-data/commits/master

You will see that we already have a fully operational decentralized storage system.

The most recent feature introduces layering - similar to Docker - so that on-chain updates can be published to existing sites/resources without having to upload the entire file structure again. We can also now access sites via registered names (will demo this shortly).

qdn_updates.txt · Last modified: 07/21/2022 05:35 by gfactor