infinity0 has quit [Remote host closed the connection]
<sonata>
i can hack it with e.g. `{"Links":[{"Name":".nonce","Hash":"QmbFMke1KXqnYyBBWxB74N4c5SBnJMVAiMNRcGu6x1AwQH","Size":1234}],"Data":"\u0008\u0001"}` but this causes .nonce to show up in the directory listing, which is Unaesthetic
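(For reference, a sketch of how such a handcrafted dag-pb node would be added, assuming the `ipfs object put` route mentioned below; the hash and size are the ones from the message above:)
    # put the handcrafted dag-pb directory node; .nonce then shows up as an ordinary link
    echo '{"Links":[{"Name":".nonce","Hash":"QmbFMke1KXqnYyBBWxB74N4c5SBnJMVAiMNRcGu6x1AwQH","Size":1234}],"Data":"\u0008\u0001"}' | ipfs object put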
skeuomorf has joined #ipfs
droman has quit []
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
<sonata>
maybe i should be looking at ipld instead of old dag
<Mateon1>
Yeah, that's what I mean. ipfs dag put, not ipfs block add or ipfs object put
infinity0 has joined #ipfs
infinity0 has quit [Remote host closed the connection]
<Mateon1>
You are trying to handcraft a unixfs object with the above command. unixfs is protobuf, so it doesn't like additional attributes
<Mateon1>
With `ipfs dag`, you can add an object like {"_nonce": "\u0000\u0000...\u0000\u0000", "actualData": {"/": "QmTargetHash..."}}
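(A minimal sketch of that, assuming the JSON-in / dag-cbor-out defaults of `ipfs dag put`; the nonce value is arbitrary and the QmUNLL... hash, borrowed from further down the log, just stands in for the real target:)
    # wrap the real object in a dag-cbor node that carries an arbitrary extra field
    echo '{"_nonce":"c0ffee00c0ffee00","actualData":{"/":"QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn"}}' | ipfs dag put -
    # prints the CID of the wrapper node (a zdpu... dag-cbor hash)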
infinity0 has joined #ipfs
<Mateon1>
You can then resolve to QmTarget.. through that object with QmDAG/actualData
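(i.e. something along these lines, where <wrapper-cid> stands for the CID returned by `ipfs dag put`:)
    # resolving the path traverses the IPLD link and returns the wrapped node
    ipfs dag get <wrapper-cid>/actualData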
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
* sonata
nods
infinity0 has quit [Remote host closed the connection]
anewuser has quit [Ping timeout: 246 seconds]
<sonata>
ok, now... `ipfs dag get QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn | ipfs dag put - ; echo` => zdpuAxpN1LaxvwsNSEwmy5ECX32MXjJ522BFUwEN1dVkZytUC
<sonata>
what can i do with this strange new hash, the rest of the ipfs commands outside of ipfs dag seem to choke on it
Akaibu has quit [Quit: Connection closed for inactivity]
<daviddias>
!botsnack
<sprint-helper>
om nom nom
matoro has joined #ipfs
jaboja has joined #ipfs
acarrico has joined #ipfs
dignifiedquire has quit [Quit: Connection closed for inactivity]
<sonata>
I'd just like to say how much I appreciate multiaddr
<sonata>
there are design decisions i'm not thrilled by but multiaddr is not one of them
jsgrant__ has joined #ipfs
jsgrant_ has quit [Ping timeout: 260 seconds]
ralphthe1inja has joined #ipfs
ralphthe1inja has quit [Client Quit]
ralphtheninja has quit [Ping timeout: 240 seconds]
aaa| has quit [Remote host closed the connection]
acarrico has quit [Read error: Connection reset by peer]
ralphtheninja has joined #ipfs
zuck05 has joined #ipfs
rphlx has quit [Quit: Leaving]
OstlerDev has joined #ipfs
zuck05 has quit [Remote host closed the connection]
nullobject has quit [Quit: zzz]
zuck05 has joined #ipfs
appa has quit [Ping timeout: 240 seconds]
movaex has quit [Ping timeout: 246 seconds]
appa has joined #ipfs
robattila256 has quit [Quit: WeeChat 1.8]
jaboja has quit [Ping timeout: 240 seconds]
drathir has quit [Ping timeout: 260 seconds]
robattila256 has joined #ipfs
drathir has joined #ipfs
chris613 has left #ipfs [#ipfs]
afdudley[m]1 has left #ipfs ["User left"]
ipfsrocks has joined #ipfs
Guest71719 has quit [Quit: Alt-F4 at console]
shizy has joined #ipfs
m3lt_ has joined #ipfs
m3lt has quit [Ping timeout: 272 seconds]
dimitarvp has quit [Quit: Bye]
infinity0 has joined #ipfs
gully-foyle has joined #ipfs
gully-foyle has quit [Remote host closed the connection]
_whitelogger has joined #ipfs
talonz has joined #ipfs
shizy has quit [Ping timeout: 260 seconds]
talonz has quit [Remote host closed the connection]
talonz has joined #ipfs
onabreak has quit [Ping timeout: 260 seconds]
<charlienyc[m]>
Anyone know how they converted the kiwix Zim files to ipfs files? The pinning will take too long and I need a network solution for Wikipedia in Syria. Downloading just the English version will take 12 days via ipfs but only 1 for Zim.
owlet has quit [Ping timeout: 240 seconds]
_whitelogger has joined #ipfs
onabreak has joined #ipfs
Foxcool has joined #ipfs
The_8472 has quit [Ping timeout: 264 seconds]
The_8472 has joined #ipfs
ccsdss has joined #ipfs
ccsdss has left #ipfs [#ipfs]
sirdancealot has joined #ipfs
espadrine has joined #ipfs
<Kubuxu>
charlienyc[m]: conversion will take much longer
<Kubuxu>
it took about 6h on a really beefy machine with NVME drives in RAID0
mildred3 has quit [Ping timeout: 240 seconds]
jsrocks has quit [Quit: Lost terminal]
ipfsrocks has quit [Quit: Lost terminal]
m10r has joined #ipfs
<charlienyc[m]>
Than 12 days?!
<charlienyc[m]>
Can I get a torrent of the converted files then?
<charlienyc[m]>
In addition to the network overhead of ipfs, I need to limit the bandwidth usage so other people can use the network. I couldn't find documentation for doing that in ipfs.
dignifiedquire has joined #ipfs
<charlienyc[m]>
I know that torrenting the files uncompressed will break my torrent client, but a compressed one should work. I've definitely moved 1TB zipped files
<charlienyc[m]>
Via torrent, I mean
<charlienyc[m]>
Kubuxu: for my own curiosity, is that conversion documented somewhere? I could convert on my cloud set-up and do the torrent myself. You all have done plenty so far.
<Kubuxu>
it is 17 million files in one directory; it will break your filesystem
<Kubuxu>
yes in the blogpost and the repo I sent you
<Kubuxu>
are you using RPi for it or something else?
<Kubuxu>
Assuming the IOPS of a normal HDD, extraction of the dump will take about 5 days, but that is more of an underestimate than an overestimate.
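(Rough arithmetic behind that estimate, under the assumption of ~100 IOPS for a spinning disk and ~2-3 I/O operations per file for create + write + metadata: 17,000,000 files × 2-3 ≈ 34-51 million I/Os, which at 100 IOPS is ~340,000-510,000 seconds, i.e. roughly 4-6 days.)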
maxlath has joined #ipfs
Caterpillar has joined #ipfs
igorline has joined #ipfs
robattila256 has quit [Ping timeout: 245 seconds]
maxlath has quit [Ping timeout: 255 seconds]
sonata has quit [Read error: Connection reset by peer]
Guest59175 has joined #ipfs
pat36 has joined #ipfs
<charlienyc[m]>
This will go on a decent Windows 7 machine to act as a server in a computer lab.
<charlienyc[m]>
Sorry about the timing--had a meeting at the university that came up suddenly. Things happen very slowly here then all at once
<voker57>
needing a torrent to distribute files via ipfs is kinda ironic
Guest4265 has quit [Quit: Leaving]
<charlienyc[m]>
Well, ipfs is super bandwidth heavy
<charlienyc[m]>
And we're working on less than 1Mbps for 5-10 people
<voker57>
it's not really heavy, pinning just waits a lot for network due to poor design
<voker57>
there's a hack: run the daemon with --routing=none and connect to nodes manually for the duration of a big pin
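(A rough sketch of that workflow; the peer multiaddr and pin target below are placeholders:)
    # start the daemon with DHT routing disabled so the pin doesn't wait on the wider network
    ipfs daemon --routing=none &
    # manually connect to a peer that is known to have the content
    ipfs swarm connect /ip4/203.0.113.5/tcp/4001/ipfs/<peer-id>
    # run the big pin; blocks are then fetched only from the peers you connected to
    ipfs pin add <root-hash>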
Foxcool has quit [Ping timeout: 245 seconds]
<charlienyc[m]>
I ran a test when I pinned the Turkish wiki with that method. It used 10x the file size
<charlienyc[m]>
Also, I can't limit the bandwidth like in a torrent client, so I'm stuck running it overnight instead of all day
dignifiedquire has quit [Quit: Connection closed for inactivity]
<charlienyc[m]>
Any guesses as to why the pin has stalled? I have a script running the swarm connect every 10s and there are no errors from the daemon
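(Presumably something along these lines; a minimal sketch with a placeholder multiaddr:)
    # keep re-dialing the source peer every 10 seconds so the pin doesn't lose it
    while true; do
      ipfs swarm connect /ip4/203.0.113.5/tcp/4001/ipfs/<peer-id>
      sleep 10
    done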
<horrified>
does simply running an ipfs "node" help the network? like, maybe for quicker, more robust routing?
Encrypt has joined #ipfs
<alextes>
horrified: I kinda had the same question but am still busy setting up the node :p
<alextes>
do first, ask questions later :p
mahloun has quit [Ping timeout: 245 seconds]
<owlet>
horrified, that does help because of bitswap. But from what I understand the best way to help is to seed files that you think other people will want.
chadoh has joined #ipfs
rendar has quit [Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!]
igorline has quit [Ping timeout: 240 seconds]
atrapado_ has joined #ipfs
jaboja has joined #ipfs
ipfsrocks has joined #ipfs
_shizy has quit [Quit: WeeChat 1.7.1]
igorline has joined #ipfs
maxlath has quit [Quit: maxlath]
gmoro has quit [Ping timeout: 240 seconds]
chadoh has quit [Remote host closed the connection]