qdn_updates

Listed from most recent to oldest:
  
===== Developer Notes 11/19/21 =====
  
The idea of a separate "data node" was a possibility in the early stages of development, but we have now reached the stage where it's all merged into one. So every node will be "data enabled", they just won't participate in the hosting unless they view or follow. You won't have to run a separate node for data. And it won't matter if you're a minter or not, or whether you're running a full node or a top-only node. It works in all scenarios. The storage location for the data is configurable, so you can host the data on a separate drive if you want to.
  
In v3.0, there will be no rewards for hosting. That part is going to require a lot of thought and discussion, as it is very difficult to prove that a node is hosting a certain amount of data. Storj does it using audits, but these are centralized and no good for Qortal. It may be possible to do audits in a decentralized way, but we can look into that later.
I expect we will ultimately have a "data market" where people can pay QORT to have their content hosted on more nodes (and node owners would essentially sell their space at the price they choose, similar to how Sia does it).
  
In terms of allocating block rewards for data hosts, this is only possible if we can create a publicly verifiable association between a node and a minting key, and also a publicly verifiable way to prove that a node is hosting the data it claims to be. At that point I guess it's up to the community to decide on whether modifying block reward distribution to include data hosts is the right thing to do or not. I doubt it's worth discussing in too much depth yet, as we are a long way away from being able to implement anything like that. These concepts will take a lot of work to create solutions for. On the plus side, any progress we make with this can also be applied to reduce the gaming of the sponsorship system.
  
There are no limits to the file formats that can be hosted. The core does restrict the types of data that can be used in certain cases - e.g. listing metadata has to be in a certain format, thumbnails have size limits, playlists will require a certain data structure to be adhered to, etc. But in terms of just hosting a file to be shared, there are no limits other than the total file size limit, which is several hundred megabytes. Exact limit TBC.
  
We have to limit the total file size as otherwise the transaction size gets too big. This is because we store all the chunk hashes in the on-chain transaction. In the future, if we increase the size of each chunk, we can increase the total file size limit by the same proportion
(or, we can fit more data transactions in each block if we increase chunk sizes and keep the total file size limit the same).
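As a rough worked example of that proportionality (the cap on chunk hashes below is an assumed figure for illustration, not the core's actual constant), the total file size limit is simply the chunk size multiplied by the number of chunk hashes that can fit in one transaction:

<code java>
// Illustrative arithmetic only - the cap on chunk hashes per transaction is an assumed value.
public class FileSizeLimitSketch {
    public static void main(String[] args) {
        long chunkSize = 1L * 1024 * 1024; // current chunk size: 1 MB
        int maxChunkHashes = 512;          // hypothetical cap imposed by the maximum transaction size

        long maxTotalFileSize = chunkSize * maxChunkHashes;
        System.out.println("1 MB chunks: " + maxTotalFileSize / (1024 * 1024) + " MB max file size"); // 512 MB

        // Doubling the chunk size doubles the limit while keeping the same number of
        // hashes (and so roughly the same transaction size) on chain.
        System.out.println("2 MB chunks: " + (2 * chunkSize * maxChunkHashes) / (1024 * 1024) + " MB max file size"); // 1024 MB
    }
}
</code>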
  
===== Developer Notes 11/17/21 =====
  
Each website is broken up into chunks (currently 1MB each but will probably increase to 2MB+ at some point), and these chunks can be distributed in any way we choose. You could have one chunk per node, or all chunks on all nodes, or anything in between. The system pieces them together from all over the world when it needs to build the website back into its original state. The actual logic of the distribution of chunks is yet to be implemented; it currently attempts to store all chunks on each node that follows the name or views the content. But soon this will become more intelligent. I imagine we will have a total storage limit per name that is being followed. Once we reach the limit, we'll start dropping chunks evenly so that no-one is holding the complete set. The node owner will have full oversight. By default, no content will be hosted by each node. They will only store the hashes of the data which are on the chain that everyone has a copy of. Once a node owner either views some content, or follows a content creator, they will then start hosting a copy of the content that is viewed or followed. The idea behind following is that you can support creators on the network that you trust, without having to view every single piece of their content.
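A minimal sketch of that chunking step, assuming 1MB chunks and SHA-256 chunk hashes (the core's actual chunking and hashing details may differ): each per-chunk hash is what ends up on chain, so any node can verify a chunk it receives without trusting the sender.

<code java>
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.List;

// Split a file into fixed-size chunks and record one hash per chunk.
public class ChunkingSketch {
    static final int CHUNK_SIZE = 1024 * 1024; // 1 MB, per the note above

    public static List<byte[]> chunkHashes(Path file) throws IOException, NoSuchAlgorithmException {
        List<byte[]> hashes = new ArrayList<>();
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buffer = new byte[CHUNK_SIZE];
            int read;
            while ((read = in.readNBytes(buffer, 0, CHUNK_SIZE)) > 0) {
                MessageDigest digest = MessageDigest.getInstance("SHA-256");
                digest.update(buffer, 0, read);
                hashes.add(digest.digest()); // one hash per chunk, stored on chain
            }
        }
        return hashes;
    }
}
</code>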
We may have an opt-in setting to allow you to become a "relay" - this allows data outside of the criteria to be relayed via your node but not stored. That will be disabled by default so it is entirely an opt-in feature. Relay nodes will help the network function more smoothly, so anyone that does it will be helping content to flow more easily. The whole thing is going to be a bit of an experiment. But I think we've covered enough of the basics for it to succeed, after a few rounds of fixes of course.
  
===== Developer Notes 10/26/21 =====
  
The files are "grouped" by the transaction that they are contained in. So in the case of a website, you would have to download all files relating to a transaction (and potentially past transactions) in order to access any file contained within it. For example:
  
A user wants to view an image at /site/QortalDemo/wp-content/uploads/2021/02/qortal-logo.png (real image is here: http://node1.qortal.uk:12393/site/QortalDemo/wp-content/uploads/2021/02/qortal-logo.png)
  
In order to do this, they need to build the QortalDemo website resource.
  
The build system in the node gets notified that they have requested the file, and it goes and looks up arbitrary transactions for name QortalDemo with the service WEBSITE.
  
It may find just a recent PUT transaction, which contains the entire site, or it may find a PUT followed by multiple PATCH transactions.
Once it knows what layers are needed, it puts out requests to the network to retrieve the encrypted data files associated with these layers. For each file it checks against the hashes on chain to ensure they are correct.

Once it has all the files, it builds the website resource by decrypting+extracting each layer, applying the patches, and validating the hashes of the end result to ensure they match what was uploaded.
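Both verification steps boil down to re-hashing whatever was received and comparing it with the hash already on chain; a minimal sketch, again assuming SHA-256 (the core's exact hashing scheme may differ):

<code java>
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Data fetched from the network is untrusted until its hash matches the hash
// recorded on chain - this applies to each encrypted file and to the built result.
public class HashCheckSketch {
    static boolean matchesOnChainHash(byte[] receivedData, byte[] onChainHash) throws NoSuchAlgorithmException {
        byte[] actual = MessageDigest.getInstance("SHA-256").digest(receivedData);
        return MessageDigest.isEqual(actual, onChainHash); // constant-time comparison
    }
}
</code>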
  
So the first time a website is accessed, it can take some time to build it depending on the complexity of the site. During this time a loading screen is shown which we can brand with the Qortal logo. Once built, the site is cached on the node so that subsequent accesses are instant (as seen in the QortalDemo and QortalWikiDemo).

From that point onwards, the node will be hosting a copy of all the files so that other nodes can request them and repeat the above process. They could optionally delete the files for some of the layers, or even just a few chunks for a single layer, and they would still be helping the network by serving fragments of the complete site, in the same way it works with torrents.

I’m planning on having a “storage policy” setting with various different possible values such as following only / viewed only / following and viewed / all / none. Then we can default to following and viewed, and people have the option of increasing or reducing the scope of their storage. I’ll be working on this part in the next few weeks.
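In settings terms, that would most likely end up as a small set of named policy values along these lines (the names below are illustrative, not the final setting values):

<code java>
// Illustrative names only - the eventual setting values may be spelled differently.
public enum StoragePolicySketch {
    FOLLOWING_ONLY,       // only host data published by names the node follows
    VIEWED_ONLY,          // only host data the node owner has actually viewed
    FOLLOWING_AND_VIEWED, // the proposed default
    ALL,                  // host everything the node comes across
    NONE                  // host nothing beyond the on-chain hashes
}
</code>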
  
===== Developer Notes 8/18/21 =====
  
We already have a fully operational decentralized storage system.
  
The most recent feature introduces layering - similar to Docker - so that on-chain updates can be published to existing sites/resources without having to upload the entire file structure again. We can also now access sites via registered names (will demo this shortly).