<SchrodingersScat>
so, it's kinda working with acdcli
<SchrodingersScat>
obviously not what god intended, and it's constantly throwing errors saying, "Please kill me." but it's doing....something
acarrico has quit [Ping timeout: 240 seconds]
<SchrodingersScat>
oh, fiddlesticks, and then it stopped :/
<u808f>
huh, seems like fun.
anewuser has joined #ipfs
<u808f>
what are some of the errors that it's spitting out?
ygrek has quit [Remote host closed the connection]
nunofmn has joined #ipfs
<SchrodingersScat>
u808f: 20:12:52.930 ERROR bitswap: Error writing block to datastore: rename /home/anon/.ipfs/blocks/AW/put-142127119 /home/anon/.ipfs/blocks/AW/CIQG2EYNBMBH5PBGNZIP7LYWRV5VF3IAEUH4CPULQLP4TXK3I457AWY.data: bad address bitswap.go:322
<SchrodingersScat>
u808f: I'm thinking it may simply be that I haven't created enough directories for it to place the actual data efficiently, if that's when AW was created, etc.
<u808f>
i have no idea tbh
acarrico has joined #ipfs
nunofmn has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
kvda has joined #ipfs
<lgierth>
that looks wonky
<lgierth>
worth filing an issue for
<SchrodingersScat>
lgierth: it's over a FUSE filesystem
<lgierth>
hah
<lgierth>
still ;)
realisation has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
<SchrodingersScat>
would be nice if there was an option to not have it rename the files, for these instances
walle303 has quit [Quit: ZNC 1.7.x-git-689-0a36695 - http://znc.in]
walle303 has joined #ipfs
skeuomorf has quit [Ping timeout: 240 seconds]
acarrico has quit [Ping timeout: 252 seconds]
<timthelion[m]>
Building software is soo hard :(
<timthelion[m]>
Can't we have a "lead-me-through-the-build-process.sh" which prompts me to install all the stuff I need to build go-ipfs, or a Dockerfile that I can run with a volume to build go-ipfs and run the test suite without having to install all the deps?
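(For what it's worth, the requested helper could start as small as the sketch below: a Python script that checks for the tools a go-ipfs build presumably needs and reports what's missing. The prerequisite list here is an assumption, not taken from the go-ipfs docs.)

```python
#!/usr/bin/env python3
"""Rough sketch of a "lead-me-through-the-build-process" helper:
check for tools a go-ipfs build is assumed to need and say what's missing."""
import shutil

# Assumed prerequisites for building go-ipfs from source (illustrative list).
REQUIRED_TOOLS = ["go", "git", "make"]

missing = [tool for tool in REQUIRED_TOOLS if shutil.which(tool) is None]

if missing:
    print("Missing build prerequisites:", ", ".join(missing))
    print("Install them with your package manager, then re-run this script.")
else:
    print("All assumed prerequisites found; try `make build` in the go-ipfs checkout.")
```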
acarrico has joined #ipfs
santamanno has quit [Read error: Connection reset by peer]
smuemd[m] has joined #ipfs
maxlath has joined #ipfs
jack has joined #ipfs
Foxcool has joined #ipfs
kvda has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
fireheron01[m] has joined #ipfs
rcat has quit [Quit: Lost terminal]
rcat has joined #ipfs
maxlath1 has joined #ipfs
maxlath has quit [Ping timeout: 240 seconds]
maxlath1 is now known as maxlath
galois_dmz has quit [Remote host closed the connection]
galois_dmz has joined #ipfs
galois_dmz has quit [Remote host closed the connection]
<timthelion[m]>
sn0wmonster: do you want more python implementations?
<sn0wmonster>
just curious if there was one where the folders weren't empty
<sn0wmonster>
There is literally nothing there but empty folders, and it was last worked on, at the latest, 2 years ago :)
<sn0wmonster>
so I am wondering if i should put support into it or just support my own fork from Go
<sn0wmonster>
anyone else already working on something that i might have missed?
maxlath has quit [Ping timeout: 240 seconds]
acarrico has quit [Ping timeout: 240 seconds]
<lgierth>
timthelion[m]: there *is* a dockerfile in go-ipfs which does just that
<lgierth>
it's called Dockerfile ;)
<timthelion[m]>
lgierth: are you sure it does that? I read it, and what I understood from reading it was that it runs docker.
<timthelion[m]>
It also COPYs the source code into the Docker image, which means you have to rebuild the image each time you make a change to the source code. I'm trying to develop go-ipfs.
<timthelion[m]>
s/runs docker/runs ipfs/
<SchrodingersScat>
if a smarter person than myself made amazonclouddrive support in ipfs then that would be neato; inserting this through the python takes ages and is buggy, there's gotta be a better way.
<SchrodingersScat>
i want to pin the world
d10r has quit [Ping timeout: 260 seconds]
<timthelion[m]>
SchrodingersScat: what does it even mean to pin the world?
<timthelion[m]>
Do you mean, like get a hash of the stuff you have on clouddrive?
<SchrodingersScat>
recursive pinning of /ipfs/
<timthelion[m]>
Like pin everything that's currently in the ipfs network?
<timthelion[m]>
Or at least that you've pinned yourself?
anewuser has joined #ipfs
<SchrodingersScat>
timthelion[m]: i'm pinning everything and sending it to mars
<timthelion[m]>
If you want to send everything to marse, or some other high latency location, the best thing to do would be to put your ~/.ipfs/ directory in a tarball and send it via a high latency tolerant protocol, such as [propane-powered usb stick cannon](https://en.wikipedia.org/wiki/Space_gun), or scp.
maxlath has joined #ipfs
<timthelion[m]>
s/marse/mars/
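A minimal sketch of that tarball idea, assuming the default ~/.ipfs repo location and an arbitrary output name (and assuming the daemon is stopped so the datastore isn't copied mid-write):

```python
#!/usr/bin/env python3
"""Pack a local IPFS repo into a tarball for shipping over scp, a USB stick, etc."""
import os
import tarfile

repo = os.path.expanduser("~/.ipfs")   # default IPFS repo location
archive = "ipfs-repo.tar.gz"           # hypothetical output name

# Pack the whole repo (blocks, datastore, config) into one archive.
with tarfile.open(archive, "w:gz") as tar:
    tar.add(repo, arcname=".ipfs")

print(f"Wrote {archive}; ship it with scp, a USB stick, or your cannon of choice.")
```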
captain_morgan has quit [Remote host closed the connection]
<sn0wmonster>
question about the decentralized nature of ipfs, i'm still learning so this might be covered someplace i'm not aware of:
<sn0wmonster>
it seems to be advertising itself as a decentralized solution for hosting, but the website talks about having an abuse department for DMCA takedowns
<sn0wmonster>
that wouldn't be logical for a decentralized network
<sn0wmonster>
so i presume that's talking about the IPFS.io hosted servers only?
<sn0wmonster>
"All content published to public IPFS infrastructure is hosted at the sole discretion of the IPFS team."
<sn0wmonster>
that implies that all of these "terms and conditions" only apply to the IPFS servers the IPFS team hosts themselves,
<sn0wmonster>
and not applicable to any other servers people want to run for IPFS, i take it?
<SchrodingersScat>
sn0wmonster: yeah, i thought there was a blacklist file, though i could have imagined that. But you'll notice that if the royal you tries to watch The Dark Knight via the ipfs.io portal, it'll timeout/fail, but if you watch it from a local ipfs node it's just fine, etc.
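The gateway-versus-local distinction is easy to see by fetching the same hash both ways; a rough sketch, where the hash is a placeholder and a local daemon with the default gateway port (8080) is assumed:

```python
#!/usr/bin/env python3
"""Fetch one hash through the public ipfs.io gateway and through a local node's gateway."""
import urllib.request

cid = "Qm..."  # placeholder content hash

for base in ("https://ipfs.io/ipfs/", "http://127.0.0.1:8080/ipfs/"):
    url = base + cid
    try:
        with urllib.request.urlopen(url, timeout=30) as resp:
            print(url, "->", resp.status, len(resp.read()), "bytes")
    except Exception as exc:  # a blocked or unavailable hash shows up as a timeout or error
        print(url, "->", exc)
```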
<sn0wmonster>
fair enough
<sn0wmonster>
had me worried that IPFS was "just more of the same" for a moment there
<sn0wmonster>
so this is just for the personal protection of the crew, which i can totally appreciate
onabreak has quit [Ping timeout: 260 seconds]
<sn0wmonster>
to avoid this completely they might have decided not to put so much emphasis on them being the owners
<SchrodingersScat>
sn0wmonster: yeah, and the same goes for any other hosters: if you get complaints then you'd want a way to block certain things.
<sn0wmonster>
e.g. running it anonymously or just not claiming public ownership and advertising that
<sn0wmonster>
i agree
<SchrodingersScat>
heh, well, someone owns the domain
<sn0wmonster>
it's not the blocking that worries me, it's the abuse of blocking by nation states
<sn0wmonster>
but yes, i can see this is a cosmetic solution
<SchrodingersScat>
hmm, guess they'd have to block your connection? or learn to block hashes on some other networking level?
<sn0wmonster>
i don't see how, if the connections are encrypted
<sn0wmonster>
you could even change one bit of the data and re-upload, and it's the same thing
<SchrodingersScat>
i've considered offering hollywood a solution where they give me the dvd/bluray copies of movies and I hash them into ipfs to make sure they get blacklisted; that makes sense, right?
<sn0wmonster>
anyway, i'm not interested in using IPFS for pirating, i'm interested in utilizing it for #Taskhive's users' profile, portfolio and work sample images, and for their uploaded screenshots and mock samples for work contracts, since the bitmessage network has certain size constraints (something like 20kB~60kB maximum)
<SchrodingersScat>
#samplemocking
<sn0wmonster>
SchrodingersScat, lol
<sn0wmonster>
but if we port it to python and allow taskhive users to run as IPFS nodes, would we be able to have them only host Taskhive-related stuff, or would it support any-and-all IPFS data storage, like how bitmessage works?
<SchrodingersScat>
supporting? they only host things that pass through their node, either by surfing around or pinning items explicitly. garbage collection removes anything you've passively collected
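That pin-versus-garbage-collection behaviour can be sketched by driving a local `ipfs` binary; the hash below is a placeholder and a running daemon is assumed:

```python
#!/usr/bin/env python3
"""Pinned content survives garbage collection; passively fetched content does not."""
import subprocess

cid = "Qm..."  # placeholder hash you actually care about

# Pin the content you want to keep...
subprocess.run(["ipfs", "pin", "add", cid], check=True)

# ...then garbage collection drops anything only fetched in passing.
subprocess.run(["ipfs", "repo", "gc"], check=True)

# Confirm the pin is still there afterwards.
subprocess.run(["ipfs", "pin", "ls", cid], check=True)
```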
Encrypt has quit [Quit: Quit]
Foxcool has joined #ipfs
maxlath has quit [Ping timeout: 240 seconds]
<sn0wmonster>
so if i have a node and i only want to allow images of a certain size, or with a certain keyword, or with certain elements/contents, or host json arrays with particular content, would i be able to see into the contents and effectively scan to see if it's something i want to store or not?
<sn0wmonster>
if not, i guess that could be mitigated by having a bitmessage broadcast of the hash so that all clients know what to accept and store, and ignore everything else
<SchrodingersScat>
that's beyond my experience
realisation has joined #ipfs
<sn0wmonster>
ty for answering just the same. i'll await an opinion from others
maxlath has joined #ipfs
hashcore has joined #ipfs
nunofmn has joined #ipfs
hashcore has quit [Client Quit]
hashcore has joined #ipfs
<timthelion[m]>
sn0wmonster: it is possible, but it would require either 1) scanning the entire network (impractical) or 2) deciding whether to store it when the object is requested at the gateway...
<sn0wmonster>
ah so kind of retroactive censoring?
<sn0wmonster>
like "oh crap, i'm storing this!?"
mazeinmaze_ has quit [Ping timeout: 268 seconds]
<sn0wmonster>
i guess a whitelist ban is the best way then
<sn0wmonster>
only including hashes that have been announced over an out-of-band network
<sn0wmonster>
still better than storing the entire image in *that* network
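One way the whitelist idea could look, as a rough sketch: hashes arrive over some out-of-band channel (bitmessage, in this discussion) and the node only pins what is on the list and under a size cap. The whitelist contents and the size limit are illustrative assumptions, and a local daemon is assumed to be running.

```python
#!/usr/bin/env python3
"""Pin only hashes that were announced out-of-band and are small enough."""
import json
import subprocess

MAX_BYTES = 5 * 1024 * 1024   # hypothetical per-object cap
whitelist = {"Qm..."}         # hashes announced out-of-band (placeholder set)

def object_size(cid: str) -> int:
    """Ask the local node for the cumulative DAG size behind `cid`."""
    out = subprocess.run(["ipfs", "object", "stat", cid, "--enc=json"],
                         check=True, capture_output=True, text=True).stdout
    return json.loads(out)["CumulativeSize"]

def maybe_pin(cid: str) -> None:
    if cid not in whitelist:
        print("skipping", cid, "- not on the whitelist")
        return
    if object_size(cid) > MAX_BYTES:
        print("skipping", cid, "- too large")
        return
    subprocess.run(["ipfs", "pin", "add", cid], check=True)
```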
realisation has quit [Quit: My MacBook has gone to sleep. ZZZzzz…]
mbags has quit [Remote host closed the connection]
walle303 has quit [Changing host]
walle303 has joined #ipfs
ZarkBit has quit [Quit: Going offline, see ya! (www.adiirc.com)]