====== QORT Data Hosting Updates ======

This page has moved: [[QDN Updates]]

===== Notes From CalDescent 8/18/22 =====
For anyone interested in checking out the progress, you can do so here: https://

You will see that we already have a fully operational decentralized storage system.

The most recent feature introduces layering - similar to Docker - so that on-chain updates can be published to existing sites/

===== Notes From CalDescent 10/26/21 =====

The files are "

A user wants to view an image at /

In order to do this, they need to build the QortalDemo website resource.

The build system in the node gets notified that they have requested the file, and it goes and looks up arbitrary transactions for name QortalDemo with the service WEBSITE.

It may find just a recent PUT transaction,

Once it knows what layers are needed, it puts out requests to the network to retrieve the encrypted data files associated with these layers. For each file it checks against the hashes on chain to ensure they are correct.

Once it has all the files, it builds the website resource by decrypting+extracting each layer, applying the patches, and validating the hashes of the end result to ensure they match what was uploaded.
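Pieced together, that flow looks roughly like the sketch below. This is only an illustration of the steps just described, not the actual core code: the class and method names, and the use of a SHA-256 digest, are assumptions.

<code java>
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.List;

/** Illustrative sketch of the build flow described above - not the actual Qortal core code. */
public class ResourceBuildSketch {

    /** One on-chain layer: a PUT (full snapshot) or a PATCH (changes against the previous state). */
    record Layer(boolean isPatch, List<byte[]> chunkHashes) {}

    /** Rebuild a resource (e.g. the QortalDemo website) from its layers, oldest first. */
    byte[] buildResource(List<Layer> layers) throws Exception {
        byte[] state = new byte[0];
        for (Layer layer : layers) {
            byte[] layerData = new byte[0];
            for (byte[] onChainHash : layer.chunkHashes()) {
                byte[] encryptedChunk = requestChunkFromPeers(onChainHash); // ask the network for the file
                verify(encryptedChunk, onChainHash);                        // check against the on-chain hash
                layerData = concat(layerData, decrypt(encryptedChunk));     // decrypt and append
            }
            byte[] extracted = extract(layerData);                          // decrypt+extract each layer
            state = layer.isPatch() ? applyPatch(state, extracted) : extracted;
        }
        // A final hash check of the end result against what was uploaded would happen here,
        // before the built resource is cached locally.
        return state;
    }

    void verify(byte[] chunk, byte[] expected) throws Exception {
        byte[] actual = MessageDigest.getInstance("SHA-256").digest(chunk); // digest algorithm assumed
        if (!Arrays.equals(actual, expected))
            throw new IllegalStateException("Chunk does not match its on-chain hash");
    }

    byte[] concat(byte[] a, byte[] b) {
        byte[] out = Arrays.copyOf(a, a.length + b.length);
        System.arraycopy(b, 0, out, a.length, b.length);
        return out;
    }

    // Network, crypto and archive handling are out of scope for this sketch.
    byte[] requestChunkFromPeers(byte[] hash)    { throw new UnsupportedOperationException(); }
    byte[] decrypt(byte[] encrypted)             { throw new UnsupportedOperationException(); }
    byte[] extract(byte[] archive)               { throw new UnsupportedOperationException(); }
    byte[] applyPatch(byte[] base, byte[] patch) { throw new UnsupportedOperationException(); }
}
</code>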
So the first time a website is accessed, it can take some time to build it depending on the complexity of the site. During this time a loading screen is shown which we can brand with the Qortal logo. Once built, the site is cached on the node so that subsequent accesses are instant (as seen in the QortalDemo and QortalWikiDemo).

From that point onwards, the node will be hosting a copy of all the files so that other nodes can request them and repeat the above process. They could optionally delete the files for some of the layers or even just a few chunks for a single layer, and they would still be helping the network by serving fragments of the complete site, in the same way it works with torrents.

I’m planning on having a “storage policy” setting with various different possible values such as following only / viewed only / following and viewed / all / none. Then we can default to following and viewed, and people have the option of increasing or reducing the scope of their storage. I’ll be working on this part in the next few weeks.
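As a rough illustration of what such a setting could look like, here is a hypothetical enum - the names are placeholders, not actual Qortal settings:

<code java>
/** Hypothetical sketch of the proposed "storage policy" setting - names are placeholders only. */
public enum StoragePolicy {
    FOLLOWED_ONLY,        // host complete datasets from creators the node follows
    VIEWED_ONLY,          // host individual resources the node has viewed
    FOLLOWED_AND_VIEWED,  // the suggested default
    ALL,                  // host everything the node comes across
    NONE                  // host nothing beyond the node's own published data
}
</code>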
===== Notes From CalDescent 11/17/21 =====

Each website is broken up into chunks (currently 1MB each but will probably increase to 2MB+ at some point), and these chunks can be distributed in any way we choose. You could have one chunk per node, or all chunks on all nodes, or anything in between. The system pieces them together from all over the world when it needs to build the website back into its original state. The actual logic of the distribution of chunks is yet to be implemented;
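A minimal sketch of the chunking idea, assuming the current 1MB chunk size and a SHA-256 digest (both of which may change or differ in the real implementation):

<code java>
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/** Illustrative only: split a file into fixed-size chunks and collect the hash of each one. */
public class ChunkSketch {
    static final int CHUNK_SIZE = 1024 * 1024; // currently 1MB, may increase to 2MB+ later

    /** The returned list of hashes is what gets recorded in the on-chain transaction. */
    static List<byte[]> splitAndHash(byte[] fileData) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256"); // digest algorithm assumed
        List<byte[]> chunkHashes = new ArrayList<>();
        for (int offset = 0; offset < fileData.length; offset += CHUNK_SIZE) {
            int end = Math.min(offset + CHUNK_SIZE, fileData.length);
            byte[] chunk = Arrays.copyOfRange(fileData, offset, end);
            chunkHashes.add(digest.digest(chunk)); // each chunk can then live on any subset of nodes
        }
        return chunkHashes;
    }
}
</code>

Nodes can then fetch individual chunks by hash from whichever peers hold them and reassemble the original file, as described above.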
With regard to transferring illegal content across nodes, this was probably the #1 consideration of mine when I started working on the data hosting. I've given it more attention than any other aspect of the project, as it's very important to me that I don't have all sorts of unknown/

- You host individual data resources that you view

- You also host complete datasets from creators that you follow

- Your node won't store or relay data unless it meets the above criteria
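A minimal sketch of how those criteria might translate into a node's storage decision - the names here are hypothetical, not the actual core code:

<code java>
import java.util.Set;

/** Hypothetical storage decision based on the criteria above - illustrative only. */
public class StorageDecisionSketch {
    private final Set<String> followedNames;    // creators this node follows
    private final Set<String> viewedResources;  // individual resources this node has viewed

    public StorageDecisionSketch(Set<String> followedNames, Set<String> viewedResources) {
        this.followedNames = followedNames;
        this.viewedResources = viewedResources;
    }

    /** Returns true if this node should store or relay the given resource. */
    public boolean shouldStoreOrRelay(String creatorName, String resourceId) {
        if (followedNames.contains(creatorName)) return true;   // complete datasets from followed creators
        return viewedResources.contains(resourceId);            // individual resources this node has viewed
    }
}
</code>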
We may have an opt-in setting to allow you to become a "
===== Notes From CalDescent 11/19/21 =====

The idea of a separate "data node" was a possibility in the early stages of development,

In v3.0, there will be no rewards for hosting. That part is going to require a lot of thought and discussion, as it is very difficult to prove that a node is hosting a certain amount of data. Storj does it using audits, but these are centralized and no good for Qortal. It may be possible to do audits in a decentralized way, but we can look into that later.
I expect we will ultimately have a "data market"

In terms of allocating block rewards for data hosts, this is only possible if we can create a publicly verifiable association between a node and a minting key, and also a publicly verifiable way to prove that a node is hosting the data it claims to be. At that point I guess it's up to the community to decide whether modifying the block reward distribution to include data hosts is the right thing to do. I doubt it's worth discussing in too much depth yet, as we are a long way away from being able to implement anything like that. These concepts will take a lot of work to create solutions for. On the plus side, any progress we make with this can also be applied to reduce the gaming of the sponsorship system.

There are no limits to the file formats that can be hosted. The core does restrict the types of data that can be used in certain cases - e.g. listing metadata has to be in a certain format, thumbnails have size limits, playlists will require a certain data structure to be adhered to, etc. But in terms of just hosting a file to be shared, there are no limits other than the total file size limit, which is several hundred megabytes. Exact limit TBC.

We have to limit the total file size as otherwise the transaction size gets too big. This is because we store all the chunk hashes in the on-chain transaction. In the future, if we increase the size of each chunk, we can increase the total file size limit by the same proportion (or, we can fit more data transactions in each block if we increase chunk sizes and keep the total file size limit the same).
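A rough back-of-the-envelope illustration of that trade-off, assuming 32-byte hashes, the current 1MB chunk size and a ~500MB file - all three are assumptions, and the exact limit is still TBC:

<code java>
/** Back-of-the-envelope numbers only - hash size, chunk size and file size are assumptions. */
public class TransactionSizeSketch {
    public static void main(String[] args) {
        long chunkSize = 1024L * 1024;         // 1MB chunks (current)
        long hashSize  = 32;                   // e.g. a 32-byte digest per chunk
        long fileSize  = 500L * 1024 * 1024;   // a ~500MB file ("several hundred megabytes")

        long chunks    = (fileSize + chunkSize - 1) / chunkSize;  // 500 chunks
        long hashBytes = chunks * hashSize;                       // 16,000 bytes of hashes stored on chain

        System.out.printf("%d chunks -> %,d bytes of hashes in the transaction%n", chunks, hashBytes);
        // Doubling the chunk size halves the number of hashes, so the total file size limit could
        // double for the same transaction size (or more data transactions could fit per block).
    }
}
</code>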