stebalien changed the topic of #ipfs to: Heads Up: To talk, you need to register your nick! Announcements: go-ipfs 0.4.18 and js-ipfs 0.33 are out! Get them from dist.ipfs.io and npm respectively! | Also: #libp2p #ipfs-cluster #filecoin #ipfs-dev | IPFS, the InterPlanetary FileSystem: https://github.com/ipfs/ipfs | Logs: https://view.matrix.org/room/!yhqiEdqNjyPbxtUjzm:matrix.org/ | Forums: https://discuss.ipfs.io | Code of Con
Steverman has quit [Ping timeout: 272 seconds]
cheet has joined #ipfs
sammacbeth has quit [Quit: Ping timeout (120 seconds)]
chiui has quit [Ping timeout: 240 seconds]
07IAAZFQO is now known as iczero
randomfromdc has joined #ipfs
sammacbeth has joined #ipfs
thomasanderson has joined #ipfs
<postables[m]> swedneck: turn on debugging for your daemon, run the ipfs add command with debugging. What issues are you experiencing? I've had issues in the past with adding and pinning a file at the same time, but that was fixed in release 0.4.18 I believe
<Swedneck> it's just really really really painfully slow
<Swedneck> over 10h to add 60 gigs of data
<Swedneck> (which takes like, 10 min on my desktop)
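[editor's note: the gap described above works out to a large throughput difference; a rough sketch using the 60 GB / 10 h / 10 min figures from the messages above:]

```shell
# Rough add throughput implied by the numbers above (integer MB/s).
# 60 GB over 10 hours on the server vs. 60 GB over 10 minutes on the desktop.
server=$(( 60 * 1024 / (10 * 3600) ))   # MB per second on the server
desktop=$(( 60 * 1024 / (10 * 60) ))    # MB per second on the desktop
echo "server: ${server} MB/s, desktop: ${desktop} MB/s"
```

[roughly 1-2 MB/s against ~100 MB/s, which points at a local disk or datastore bottleneck rather than hashing speed alone]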
<Swedneck> how do i turn on debugging?
alexgr has quit [Ping timeout: 246 seconds]
<postables[m]> `ipfs -D <your-command>` so `ipfs -D daemon` or `ipfs -D add`
<postables[m]> I take it you're not doing this test on a desktop? Where are you testing, what are the specs of the machine you're testing this on, and what are the specs of your desktop?
<Swedneck> it's on a desktop i use as a server, it's an AMD A8-6500 APU (4) @ 3.5GHz, 20+ GB of ram, and a 2TB hard drive
<Swedneck> do i need to let it run `ipfs add` to completion?
Mateon3 has joined #ipfs
Mateon1 has quit [Ping timeout: 250 seconds]
Mateon3 is now known as Mateon1
skybeast has quit [Quit: Page closed]
q6AA4FD has quit [Ping timeout: 246 seconds]
randomfromdc has quit [Ping timeout: 256 seconds]
cwahlers has joined #ipfs
cwahlers_ has quit [Ping timeout: 252 seconds]
<postables[m]> I remember we talked about hash on read and bloom filters a while ago. Do you have hash on read enabled on the server? If so, the CPU is probably your bottleneck
<Swedneck> i do not
<postables[m]> hmm 🤔 what does `htop` show your core utilization is like when you're running the add
<Swedneck> fairly high, but not above 90% on any core
<postables[m]> hmm
<postables[m]> can you try running a pin on your server for the hash, if it can reach your node running on your desktop? could help isolate whether it's a disk-level issue perhaps
<Swedneck> sure, they're on the same LAN btw
<postables[m]> cool that should make for some easy debugging then. What're the specs like on your desktop which you aren't having the issue on?
<Swedneck> ryzen 5 1600, 16GB ram, OS is on an ssd but the repo is on HDD (and i'm using --nocopy)
<Swedneck> oh boy right, another issue
<Swedneck> `Error: pin: open /home/ipfs/fdroid-mirror/repo/a2dp.Vol_121.apk.asc: no such file or directory`
<Swedneck> i had tried to add the directory before using --nocopy on the server, and now i can't seem to unpin that file
Fabricio20 has joined #ipfs
<postables[m]> hmm, could possibly be a CPU issue, but an experimental feature bug sounds more likely. If this is happening repeatedly with your usage of `--nocopy`, it sounds like you're running into a bug with an experimental feature. Might be worth opening a bug report on the ipfs repo
<postables[m]> Do you have this issue when not using `--nocopy`?
<Swedneck> well the issue was caused by using it, i think
<Swedneck> i'm not using --nocopy anymore
mauz555 has joined #ipfs
mischat has quit [Remote host closed the connection]
<postables[m]> strange, might want to open a bug report for the `no such file or directory` issue. If possible, I would try using a fresh repo that hasn't been used with experimental features. May also want to try `ipfs repo verify`. I would also double-check that on the new node, your repo max GB limit isn't set so low that adding the 60GB of data would hit the limit.
<postables[m]> Are you using flatfs for your repo?
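[editor's note: the repo size cap mentioned above lives in the `Datastore` section of the IPFS config (`~/.ipfs/config` by default); in go-ipfs 0.4.x the default `StorageMax` is "10GB", well below the 60 GB being added. A fragment for illustration, values are the defaults:]

```json
"Datastore": {
  "StorageMax": "10GB",
  "StorageGCWatermark": 90,
  "GCPeriod": "1h"
}
```

[`StorageMax` is a soft limit consulted by garbage collection rather than a hard write cap; it can be raised with e.g. `ipfs config Datastore.StorageMax 100GB`]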
<Swedneck> the server is using badger
<Swedneck> i'm gonna try just wiping the directory i'm adding from and re-downloading the contents first
<Swedneck> i could just have fucked up the files somehow
<Swedneck> there were definitely too many files at least, it should've been 48 gigs lol
<Swedneck> hmm, my server is really slow at downloading stuff as well..
<postables[m]> re-downloading from IPFS?
<Swedneck> no, rsync
<Swedneck> cpu usage is around 50% for all cores still
<Swedneck> 5200 rpm drives aren't that much slower than 7200 rpm ones, right?
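[editor's note: as a back-of-the-envelope check on the spindle-speed question, average rotational latency is half a revolution, so it scales inversely with RPM; a sketch using the figures from the question above:]

```shell
# Average rotational latency = half a revolution, in microseconds:
#   latency_us = (60 s/min * 1,000,000 us/s / rpm) / 2
slow=$(( 60 * 1000000 / 5200 / 2 ))   # 5200 rpm drive
fast=$(( 60 * 1000000 / 7200 / 2 ))   # 7200 rpm drive
echo "5200rpm: ${slow}us, 7200rpm: ${fast}us average rotational latency"
```

[under a 1.4x difference from spin speed alone; a seek-heavy small-block workload like an IPFS repo is usually hurt more by seek time and caching than by RPM]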
q6AA4FD has joined #ipfs
erratic has quit [Excess Flood]
}ls{ has quit [Quit: real life interrupt]
clemo has quit [Ping timeout: 250 seconds]
<postables[m]> for this kind of stuff, I believe they are
<Swedneck> hmm
<Swedneck> think a hybrid drive would be worth it for this?
<postables[m]> somewhat, you should be fine with just a 7.2K RPM one
<postables[m]> you can use 10KRPM if you need a little more speed without breaking the bank
<Swedneck> well 1TB hybrid is 60 bucks
<Swedneck> and no 10k available :(
<postables[m]> if 1TB is 7.2K rpm u should be fine
mauz555 has quit []
<Swedneck> i'm actually not sure it's a 5200 rpm drive, but i guess there's no other explanation?
thomasanderson has quit [Remote host closed the connection]
toxync01 has quit [Ping timeout: 245 seconds]
toxync01 has joined #ipfs
kapil____ has joined #ipfs
brewski[m] is now known as brewski0244[m]
user_51 has quit [Ping timeout: 272 seconds]
user_51 has joined #ipfs
cwahlers_ has joined #ipfs
cwahlers has quit [Ping timeout: 244 seconds]
<DarkDrgn2k[m]> HDDs (not SSDs) outperform on sequential writes
<DarkDrgn2k[m]> so if you are writing one large file, a 7.2k rpm drive will be quite a bit faster than the 5.2k one
<DarkDrgn2k[m]> (they will usually even outperform SSDs)
<DarkDrgn2k[m]> if it's random read/write I don't know how different it would be... I've only used those for long-term-storage type setups
jpf137 has quit [Ping timeout: 240 seconds]
thomasanderson has joined #ipfs
thomasanderson has quit [Ping timeout: 272 seconds]
<Swedneck> Well it's mostly a bunch of apks
purisame has quit [Ping timeout: 244 seconds]
mauz555 has joined #ipfs
thomasanderson has joined #ipfs
zzach has quit [Ping timeout: 246 seconds]
zzach has joined #ipfs
lassulus_ has joined #ipfs
lassulus has quit [Ping timeout: 244 seconds]
lassulus_ is now known as lassulus
thomasanderson has quit [Remote host closed the connection]
dimitarvp has quit [Quit: Bye]
spinza has quit [Quit: Coyote finally caught up with me...]
mauz555 has quit [Remote host closed the connection]
spinza has joined #ipfs
thomasanderson has joined #ipfs
thomasanderson has quit [Ping timeout: 246 seconds]
toxync01 has quit [Ping timeout: 245 seconds]
toxync01 has joined #ipfs
mauz555 has joined #ipfs
cwahlers has joined #ipfs
cwahlers_ has quit [Ping timeout: 240 seconds]
thomasanderson has joined #ipfs
mauz555 has quit [Ping timeout: 244 seconds]
thomasan_ has joined #ipfs
thomasanderson has quit [Ping timeout: 268 seconds]
e0f has joined #ipfs
thomasan_ has quit [Remote host closed the connection]
thomasanderson has joined #ipfs
mauz555 has joined #ipfs
thomasanderson has quit [Remote host closed the connection]
BeerHall has joined #ipfs
xcm has quit [Remote host closed the connection]
xcm has joined #ipfs
_whitelogger____ has joined #ipfs
_whitelogger has joined #ipfs
_whitelogger_ has joined #ipfs
_whitelogger__ has joined #ipfs
_whitelogger___ has joined #ipfs
cwahlers_ has joined #ipfs
cwahlers has quit [Ping timeout: 272 seconds]
xcm has quit [Remote host closed the connection]
xcm has joined #ipfs
fireglow has left #ipfs ["puf"]
James[m]5 has joined #ipfs
mauz555 has quit []
vyzo has quit [Quit: Leaving.]
vyzo has joined #ipfs
_whitelogger has joined #ipfs
kapil____ has joined #ipfs
aarshkshah1992 has joined #ipfs
<xialvjun[m]> Is there any document about IPFS gateway api ?
James[m]5 has left #ipfs ["User left"]
aarshkshah1992 has quit [Remote host closed the connection]
jamesaxl has quit [Quit: WeeChat 2.3]
aarshkshah1992 has joined #ipfs
nonono has joined #ipfs
cwahlers has joined #ipfs
cwahlers_ has quit [Ping timeout: 268 seconds]
Ai9zO5AP has joined #ipfs
}ls{ has joined #ipfs
<xialvjun[m]> I mean the gateway API, not the HTTP API. http://localhost:8080/ipfs/<path> can return many types of things, and I want to know how it works
<xialvjun[m]> aarshkshah1992:
clemo has joined #ipfs
alexgr has joined #ipfs
spinza has quit [Quit: Coyote finally caught up with me...]
kapil____ has quit [Quit: Connection closed for inactivity]
spinza has joined #ipfs
rendar has joined #ipfs
maxzor has joined #ipfs
chiui has joined #ipfs
grawity has joined #ipfs
<grawity> hey, where can I find a table of multihash prefixes? like, what does '12D...' mean, as opposed to the usual 'QmN...'
BeerHall has quit [Ping timeout: 244 seconds]
BeerHall has joined #ipfs
<olizilla> grawity: the raw table lives here https://github.com/multiformats/multicodec/blob/master/table.csv
<olizilla> but you might find this helpful too https://github.com/ipld/cid#how-does-it-work
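To illustrate olizilla's answer, here is a minimal stdlib-only sketch (an assumption of mine, not from the log) of how those visible prefixes fall out of the raw multihash bytes listed in the multicodec table: both `Qm...` and `12D3Koo...` strings are base58btc encodings of a multihash, and the first two decoded bytes are the hash-function code and digest length.

```python
# Why CIDv0 strings start with "Qm" and ed25519 peer IDs with "12D3Koo":
# both are base58btc-encoded multihashes, and the first two raw bytes
# (hash-function code, digest length) determine the visible prefix.
#   0x12 0x20 -> sha2-256, 32-byte digest    -> "Qm..."
#   0x00 0x24 -> identity,  36-byte payload  -> "12D3Koo..." (ed25519 peer ID)

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58decode(s: str) -> bytes:
    """Plain base58btc decode; leading '1' chars become leading zero bytes."""
    n = 0
    for ch in s:
        n = n * 58 + B58.index(ch)
    body = n.to_bytes((n.bit_length() + 7) // 8, "big") if n else b""
    pad = len(s) - len(s.lstrip("1"))
    return b"\x00" * pad + body

def multihash_prefix(encoded: str) -> tuple[int, int]:
    """Return (hash-function code, digest length) of a base58 multihash."""
    raw = b58decode(encoded)
    return raw[0], raw[1]
```

So a `12D...` identifier is a peer ID whose multihash uses the identity function (code 0x00, wrapping the public key directly), while the usual `QmN...` content IDs use sha2-256 (code 0x12); the full code list is the table.csv linked above.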
cwahlers_ has joined #ipfs
cwahlers has quit [Ping timeout: 244 seconds]
gmoro has joined #ipfs
maxzor has quit [Ping timeout: 250 seconds]
thomasanderson has joined #ipfs
thomasanderson has quit [Ping timeout: 246 seconds]
_whitelogger has joined #ipfs
jpf137 has joined #ipfs
jpf137 has quit [Quit: Leaving]
jesse22 has quit [Ping timeout: 250 seconds]
malaclyps has quit [Read error: Connection reset by peer]
malaclyps has joined #ipfs
yarb44 has joined #ipfs
aarshkshah1992 has quit [Remote host closed the connection]
aarshkshah1992 has joined #ipfs
aarshkshah1992 has quit [Ping timeout: 244 seconds]
dolphy has quit [Ping timeout: 252 seconds]
xdecimal has joined #ipfs
cwahlers_ has quit [Ping timeout: 250 seconds]
cwahlers has joined #ipfs
purisame has joined #ipfs
xdecimal has quit [Quit: -a- IRC for Android 2.1.44]
kapil____ has joined #ipfs
maxzor has joined #ipfs
clemo has quit [Remote host closed the connection]
aerth has quit [Remote host closed the connection]
aerth has joined #ipfs
_whitelogger has joined #ipfs
aarshkshah1992 has joined #ipfs
aarshkshah1992 has quit [Ping timeout: 268 seconds]
Taoki has joined #ipfs
ikari` has joined #ipfs
<ikari`> o hai
jesse22 has joined #ipfs
WhizzWr has quit [Quit: Bye!]
WhizzWr has joined #ipfs
nivekuil has quit [Ping timeout: 240 seconds]
apiarian has quit [Quit: zoom!]
Caterpillar has joined #ipfs
apiarian has joined #ipfs
BeerHall has quit [Ping timeout: 240 seconds]
nivekuil has joined #ipfs
cwahlers_ has joined #ipfs
cwahlers has quit [Ping timeout: 272 seconds]
abbiya has joined #ipfs
<abbiya> hello
<abbiya> I did not understand anything that is written in the above project
<abbiya> i am looking for ways to discover webrtc peers based on topics
<abbiya> can anyone please help me understand my problem and lead me to answers?
chiui has quit [Ping timeout: 250 seconds]
shoku has quit [Quit: The Lounge - https://thelounge.github.io]
shoku has joined #ipfs
Ai9zO5AP has quit [Quit: WeeChat 2.3]
Ai9zO5AP has joined #ipfs
Ai9zO5AP has quit [Client Quit]
Ai9zO5AP has joined #ipfs
abbiya has left #ipfs [#ipfs]
shoku has quit [Quit: The Lounge - https://thelounge.github.io]
shoku has joined #ipfs
TimMc has quit [Changing host]
TimMc has joined #ipfs
nonono has quit [Ping timeout: 246 seconds]
Adbray has quit [Remote host closed the connection]
maxzor has quit [Remote host closed the connection]
cwahlers has joined #ipfs
cwahlers_ has quit [Ping timeout: 250 seconds]
chiui has joined #ipfs
nonono has joined #ipfs
seba- has joined #ipfs
moonman_ has joined #ipfs
seba- has quit [Changing host]
seba- has joined #ipfs
<seba-> hm
<seba-> are there any charts
<seba-> on the growth of IPFS?
moonman_ has quit [Client Quit]
moonman_ has joined #ipfs
Adbray has joined #ipfs
Adbray has quit [Max SendQ exceeded]
Adbray has joined #ipfs
Orkun[m] has joined #ipfs
erratic has joined #ipfs
<ikari`> from other stupid questions: should my peer count be always 0?:P
zloba[m] has joined #ipfs
dimitarvp has joined #ipfs
brianhoffman has quit [Read error: Connection reset by peer]
p[m]1 has joined #ipfs
brianhoffman has joined #ipfs
xcm has quit [Remote host closed the connection]
xcm has joined #ipfs
erratic has quit [Quit: this computer has gone to sleep...]
ikari` has quit [Quit: This computer has gone to sleep]
vivus has joined #ipfs
aarshkshah1992 has joined #ipfs
aarshkshah1992 has quit [Ping timeout: 240 seconds]
roygbiv has joined #ipfs
jesse22 has quit [Read error: Connection timed out]
vivus has quit [Remote host closed the connection]
chiui has quit [Ping timeout: 245 seconds]
chiui has joined #ipfs
chiui has quit [Remote host closed the connection]
jesse22 has joined #ipfs
thomasanderson has joined #ipfs
Adbray has quit [Read error: Connection reset by peer]
moonman_ has quit [Remote host closed the connection]
Adbray has joined #ipfs
thomasanderson has quit [Remote host closed the connection]
Caterpillar has quit [Ping timeout: 246 seconds]
cwahlers has quit [Ping timeout: 268 seconds]
cwahlers has joined #ipfs
seba- has quit [Ping timeout: 272 seconds]
ygrek has joined #ipfs
eingenito[m] has joined #ipfs
NukeManDan has joined #ipfs
moonman_ has joined #ipfs
moonman_ has quit [Client Quit]
rendar has quit []
kapil____ has quit [Quit: Connection closed for inactivity]
xcm has quit [Remote host closed the connection]
xcm has joined #ipfs
thomasanderson has joined #ipfs
thomasanderson has quit [Ping timeout: 246 seconds]
ikari` has joined #ipfs
lidel` has joined #ipfs
lidel has quit [Ping timeout: 268 seconds]
lidel` is now known as lidel
creationix has left #ipfs [#ipfs]
spinza has quit [Quit: Coyote finally caught up with me...]
alexgr has quit [Ping timeout: 246 seconds]
alexgr has joined #ipfs
spinza has joined #ipfs
mischat has joined #ipfs
ElChupacabra has joined #ipfs
thomasanderson has joined #ipfs
thomasanderson has quit [Ping timeout: 250 seconds]
roygbiv has quit [Quit: ™]
vivus has joined #ipfs
eluldan[m] has joined #ipfs
eluldan[m] has left #ipfs [#ipfs]
NukeManDan has quit [Quit: Page closed]
vyzo has quit [Quit: Leaving.]
vyzo has joined #ipfs
hurikhan77 has quit [Quit: http://quassel-irc.org - Chat comfortably. Anywhere.]
hurikhan77 has joined #ipfs
<postables[m]> ikari`: if your peercount is always 0 then it likely means you have a firewall issue
kaminishi has joined #ipfs
cwahlers has quit [Ping timeout: 250 seconds]
cwahlers has joined #ipfs
hurikhan77 has quit [Quit: http://quassel-irc.org - Chat comfortably. Anywhere.]
ElChupacabra has quit [Ping timeout: 268 seconds]
hurikhan77 has joined #ipfs
ikari` has quit [Quit: This computer has gone to sleep]
kesenai has quit [Ping timeout: 252 seconds]
kesenai has joined #ipfs
kesenai has quit [Remote host closed the connection]
kesenai has joined #ipfs
ToxicFrog has quit [Quit: WeeChat 2.3]
ToxicFrog has joined #ipfs
thomasanderson has joined #ipfs
ikari` has joined #ipfs
thomasanderson has quit [Remote host closed the connection]
MDude has quit [Ping timeout: 268 seconds]
Aranjedeath has joined #ipfs
aarshkshah1992 has joined #ipfs
aarshkshah1992 has quit [Ping timeout: 246 seconds]
MDude has joined #ipfs
mischat has quit [Remote host closed the connection]
spinza has quit [Quit: Coyote finally caught up with me...]
dolphy has joined #ipfs
thomasanderson has joined #ipfs
xlued has joined #ipfs
spinza has joined #ipfs
thomasanderson has quit [Remote host closed the connection]
thomasan_ has joined #ipfs
kaminishi has quit [Remote host closed the connection]
mischat has joined #ipfs
oco109 has joined #ipfs
xcm has quit [Remote host closed the connection]
xcm has joined #ipfs
Mateon3 has joined #ipfs
Mateon1 has quit [Ping timeout: 244 seconds]
Mateon3 is now known as Mateon1
cwahlers_ has joined #ipfs
cwahlers has quit [Ping timeout: 250 seconds]
M-Sonata has quit [Changing host]
M-Sonata has joined #ipfs
M-Sonata has joined #ipfs
thomasan_ has quit [Remote host closed the connection]
yarb44 has quit [Remote host closed the connection]
hoodo has joined #ipfs
hoodo has quit [Client Quit]
sammacbeth has quit [Quit: Ping timeout (120 seconds)]
sammacbeth has joined #ipfs
daMaestro has joined #ipfs
<Kolonka[m]> very interesting to see china on there
<Swedneck> wtf hong kong
thomasanderson has joined #ipfs
sammacbeth has quit [Quit: Ping timeout (120 seconds)]
sammacbeth has joined #ipfs
kesenai has quit [Remote host closed the connection]
nikoladjokic[m] has joined #ipfs
The_8472 has quit [Ping timeout: 252 seconds]
thomasanderson has quit [Remote host closed the connection]
Steverman has quit [Quit: WeeChat 2.3]
The_8472 has joined #ipfs