zabirauf has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
apiarian has quit [Ping timeout: 260 seconds]
apiarian has joined #ipfs
cemerick has quit [Ping timeout: 264 seconds]
reit has joined #ipfs
shizy has joined #ipfs
shizy has quit [Ping timeout: 240 seconds]
tmg has quit [Ping timeout: 258 seconds]
jaboja has quit [Remote host closed the connection]
tidux has left #ipfs [#ipfs]
wallacoloo has joined #ipfs
tmg has joined #ipfs
mgue has quit [Ping timeout: 250 seconds]
rgrinberg has joined #ipfs
rgrinberg has quit [Read error: Connection reset by peer]
dmr has quit [Ping timeout: 250 seconds]
tmg has quit [Ping timeout: 244 seconds]
avastmick has joined #ipfs
mgue has joined #ipfs
herzmeister has quit [Quit: Leaving]
herzmeister has joined #ipfs
tmg has joined #ipfs
tmg has quit [Changing host]
tmg has joined #ipfs
dmr has joined #ipfs
dmr has quit [Ping timeout: 250 seconds]
avastmick has quit [Quit: avastmick]
tmg has quit [Ping timeout: 244 seconds]
PrinceOfPeeves has quit [Quit: Leaving]
tmg has joined #ipfs
xanza has joined #ipfs
<xanza>
I really don't understand how IPFS can claim to be "the permanent web" while being P2P. If there are no peers for an address, that content is offline and unreachable, no? Not quite permanent.
dmr has joined #ipfs
xanza has quit [Quit: Going offline, see ya! (www.adiirc.com)]
jgantunes has quit [Quit: Connection closed for inactivity]
avastmick has joined #ipfs
ashark has joined #ipfs
M-Purple has left #ipfs ["User left"]
ashark has quit [Ping timeout: 258 seconds]
reit has quit [Quit: Leaving]
avastmick has quit [Ping timeout: 258 seconds]
avastmick has joined #ipfs
mgue has quit [Quit: WeeChat 1.5]
mgue has joined #ipfs
slothbag has joined #ipfs
avastmick has quit [Quit: avastmick]
zabirauf has joined #ipfs
infinity0 has quit [Remote host closed the connection]
infinity0 has joined #ipfs
niekie has quit [Quit: No Ping reply in 180 seconds.]
niekie has joined #ipfs
J1G|Anon126 has joined #ipfs
pfrazee has quit [Remote host closed the connection]
ashark has joined #ipfs
ashark has quit [Ping timeout: 244 seconds]
<J1G|Anon126>
Hi, noob here. This is only tangentially related, but does the Neocities blog have an IPNS name?
<daviddias>
it seems that when it got extracted, tests didn't get added on the module itself
<dignifiedquire>
yeah I know, adding some tests now
<daviddias>
definitely should have some
Boomerang has joined #ipfs
zero-ghost has quit [Quit: Leaving.]
<santamanno>
guys one question, I’m sending a newsletter with content hosted on IPFS, but if my local node is not running the content does not “stay” on the network
<santamanno>
is that by design?
* dignifiedquire
write all the tests
* dignifiedquire
write all the pull-streams
em-ly has joined #ipfs
pfrazee has joined #ipfs
zero-ghost has joined #ipfs
<daviddias>
dignifiedquire: gets all the love <3
cemerick has quit [Ping timeout: 265 seconds]
<dignifiedquire>
whoo
<dignifiedquire>
santamanno: the reason is that no other node has the content pinned
<santamanno>
dignifiedquire: ok, so as long as I have my node running or someone else has it pinned, it works
<santamanno>
?
<demize>
Yes.
<demize>
But unless another running node has it pinned, there's really no place for the content to be fetched from.
<santamanno>
thanks, but ipns still resolves it even if the underlying content is not pinned, right?
<dignifiedquire>
only nodes that pinned the content will always provide it; if a node merely loaded it, it will provide it until a GC
<dignifiedquire>
your node has it pinned as it is the node that added the content
<demize>
ipns just resolves to an ipfs hash, and is stored in the DHT for N hours.
<santamanno>
so if I publish, then my node goes down, the ipns resolve fails also?
<santamanno>
(after N hours)
<demize>
Yes.
<santamanno>
crystal clear, thanks
<demize>
Default lifetime is 24 hours.
<demize>
Anyway, consider looking into something like filecoin.
<santamanno>
yeah, bottomline is: pin the content at least at two different nodes ;)
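The "two nodes" bottom line above, sketched as shell sessions. `<hash>` is a placeholder for whatever CID `ipfs add` actually prints, and the commands assume a running go-ipfs on both machines.

```shell
# On the publishing node: add the newsletter content.
# The content is implicitly pinned on this node.
ipfs add -r newsletter/

# On a second (mirror) node: pin the same hash so the
# content stays retrievable when the publisher is offline.
ipfs pin add <hash>
```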
<santamanno>
filecoin for storing content you mean, demize?
Encrypt has joined #ipfs
<demize>
uhuh...
<demize>
Though no updates since 2014 to my knowledge.
ylp has quit [Ping timeout: 260 seconds]
corvinux has quit [Ping timeout: 244 seconds]
<santamanno>
demize: I can always spawn some ipfs daemons here and there
<santamanno>
to foster adoption
<santamanno>
;)
MDude has joined #ipfs
<santamanno>
now pardon my ignorance again, if I pin content on multiple machines, then publish it on ipns, then update the content and republish… I have to re-pin on all machines?
<demize>
You pin ipfs objects.
<demize>
ipns just associates a peer ID with an ipfs object
chrisg_ has joined #ipfs
<demize>
short answer: Yes.
shizy has joined #ipfs
<santamanno>
yes, I supposed so… I guess I will have to come up with a quick re-pinning strategy
<demize>
santamanno: You could try running the pinbot yourself.
ylp has joined #ipfs
<santamanno>
demize: ok will look it up, no idea what that is =)
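The update-and-republish cycle discussed above, as a sketch. The hash variables are placeholders, and the commands assume a go-ipfs version with the `-Q` (quiet) flag.

```shell
# Re-add the updated content; objects are immutable, so this
# produces a NEW root hash.
NEW_HASH=$(ipfs add -r -Q site/)

# Repoint the IPNS name (the publishing node's peer ID) at it.
ipfs name publish "$NEW_HASH"

# On every mirror: pin the new root (and optionally drop the
# old one so gc can reclaim the space).
ipfs pin add "$NEW_HASH"
ipfs pin rm "$OLD_HASH"
```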
<dignifiedquire>
daviddias: that means I'm done with all dependencies for swarm and can do swarm - secio - pull tomorrow
<daviddias>
woot! :D
<dignifiedquire>
(not all of this code is production ready though, especially pull-net will need some more love before we can release it, but it is in a good enough state for testing concepts and proving that our problems are gone with pull-streams)
<daviddias>
using the bindings to Node.js net package is not better?
<dignifiedquire>
the less node-streams the better, and for now it was much easier to use pull-net than to wrap the node.js net methods
<dignifiedquire>
we are still using node.js, just on a more raw level :)
jedahan has joined #ipfs
<dignifiedquire>
that means from c++ -> libp2p-swarm there is not a single node-stream involved
jedahan has quit [Client Quit]
jedahan has joined #ipfs
mildred has quit [Ping timeout: 276 seconds]
jedahan has quit [Client Quit]
<daviddias>
spdy?
<daviddias>
are you rewriting spdy?
jedahan has joined #ipfs
G-Ray has quit [Quit: Konversation terminated!]
reit has joined #ipfs
kanej has joined #ipfs
Foxcool has quit [Remote host closed the connection]
Tv` has joined #ipfs
Foxcool has joined #ipfs
<ipfsbot>
[js-ipfs] diasdavid created greenkeeper-lodash-4.14.2 (+1 new commit): https://git.io/v6GX6
<ipfsbot>
js-ipfs/greenkeeper-lodash-4.14.2 a030773 greenkeeperio-bot: chore(package): update lodash to version 4.14.2...
<dignifiedquire>
daviddias: damn it, I always forget about spdy..
<dignifiedquire>
arrrrgh
<dignifiedquire>
I think it makes sense to port in the long term but will probably wrap it in the first round
<dignifiedquire>
(I looked at that code while debugging already more than I would like to..)
<cehteh>
is there any http caching proxy using ipfs as backend?
<dignifiedquire>
(that's probably why my brain always removes it from my todo list :D)
<daviddias>
cehteh: AFAIK no, but I would love to have that
<cehteh>
yeah
<daviddias>
dignifiedquire: ahah yeah, we definitely will eventually migrate it to avoid shims
<daviddias>
but pull-streams is still in the experimental phase
<daviddias>
rewriting it would be too much
<dignifiedquire>
until tomorrow :P
<daviddias>
:D
<daviddias>
looking forward to it!
<dignifiedquire>
I'm already planning ahead you know
<cehteh>
i just had the idea, that would be awesome in some ways. giving free projects a chance to build a CDN .. think about OSM or linux distros
cemerick has joined #ipfs
<dignifiedquire>
so daviddias why are we actually using spdy as a stream muxer and not sth else? what are the benefits of it?
<daviddias>
spdy is the stream muxer that we have impl both in go and js
<daviddias>
and it is faster than multiplex
<daviddias>
plus it doesn't break in our stress tests
<daviddias>
(multiplex crashes)
<dignifiedquire>
that's a pretty good reason :D
<daviddias>
you know, we kind of discuss things and evaluate before making decisions :D
<richardlitt>
nicolagreco: It hasn't been scheduled, yet; let's talk about that in the call today
<dignifiedquire>
daviddias: really ;)
<richardlitt>
Hello everyone!
<dignifiedquire>
hello richardlitt
<richardlitt>
Unfortunately, Google Hangouts doesn't want me to start the Hangout today.
<richardlitt>
So, we are going to use zoom
<richardlitt>
Since zoom.us might take time to install and get going, try now
<Mateon1>
Okay... I know why the daemon insists it's running. Chrome is using the localhost:5001 port, for some reason. Still, shouldn't the daemon just launch with an alternate port then?
abbaZaba has joined #ipfs
Stebalien has joined #ipfs
<nicolagreco>
Stebalien: will happen 1:30pm to 2pm Boston time
<nicolagreco>
(if there is not enough time, it will continue 1pm onwards tomorrow)
<Stebalien>
nicolagreco: I'll be back by 1:30 (going to grab lunch).
Stebalien has quit [Remote host closed the connection]
jedahan has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<nicolagreco>
(off topic): github just banned my profile and made all the issues and repositories I have created hidden to the public
<Stebalien>
nicolagreco: I take it you've been unbanned? Your profile is visible now.
<dignifiedquire>
nicolagreco: looks like you are back :)
<nicolagreco>
I am back!
<edsilv[m]>
nicolagreco (IRC): is it recommended to join the IPLD chat if you're a newbie who's interested in the subject? or should I wait for a stream to be available?
<nicolagreco>
the gods of github decided I was a good person
<nicolagreco>
edsilv[m]: it will be a design conversation, join (maybe just listening)
<edsilv[m]>
nicolagreco (IRC): ok
galois_d_ has joined #ipfs
<jbenet>
noffle <3
<richardlitt>
IPLD should have started
<haad>
daviddias: I'll open an issues in libp2p/pubsub and add the notes from the call, is that ok?
<jbenet>
richardlitt: it's coming
<flyingzumwalt>
richardlitt we're waiting for the new zoom link.
<Kubuxu>
justin_: writing a response; whyrusleeping knows this better but I think he will be able to help.
kanej has quit [Client Quit]
palkeo has joined #ipfs
palkeo has quit [Changing host]
palkeo has joined #ipfs
herzmeister has joined #ipfs
neuthral has joined #ipfs
mmuller_ has joined #ipfs
tundracom has joined #ipfs
victorbjelkholm_ has joined #ipfs
yosafbridge` has joined #ipfs
<Mateon1>
Hi, I have a question again. Is adding files ever going to be faster? I'm trying to add a >100 gig FTP archive for transfer, and it's taking forever. Is the limitation the hashing function, or will it get faster?
Confiks_ has joined #ipfs
livegnik_ has joined #ipfs
nothingm1ch has joined #ipfs
Kubuxu_ has joined #ipfs
BHR27 has joined #ipfs
tundracomp has quit [*.net *.split]
mmuller has quit [*.net *.split]
bigbluehat has quit [*.net *.split]
ehd has quit [*.net *.split]
richardlitt has quit [*.net *.split]
nullstyle has quit [*.net *.split]
Bheru27 has quit [*.net *.split]
neuthral_ has quit [*.net *.split]
bmpvieira has quit [*.net *.split]
Kubuxu has quit [*.net *.split]
yosafbridge has quit [*.net *.split]
nivekuil has quit [*.net *.split]
ELLIOTTCABLE has quit [*.net *.split]
nothingmuch has quit [*.net *.split]
marni has quit [*.net *.split]
mrpoopybuttwhole has quit [*.net *.split]
victorbjelkholm has quit [*.net *.split]
livegnik has quit [*.net *.split]
poga has quit [*.net *.split]
Confiks has quit [*.net *.split]
Poefke has quit [*.net *.split]
mariusz1 has joined #ipfs
Kubuxu_ is now known as Kubuxu
poga1 has joined #ipfs
bmpvieira has joined #ipfs
nivekuil has joined #ipfs
mrpoopybuttwhole has joined #ipfs
bigbluehat has joined #ipfs
richardlitt has joined #ipfs
<Mateon1>
My question caused a net split! :P
Poefke has joined #ipfs
ehd has joined #ipfs
nullstyle has joined #ipfs
<Kubuxu>
Mateon1: server I was connecting is shutting down.
jborak has joined #ipfs
ELLIOTTCABLE has joined #ipfs
<jborak>
is there an ipfs command to show files/dirs I've added.
<jborak>
such that I don't need to memorize or keep track of any hashes
Guest25254[m] has quit [Ping timeout: 276 seconds]
igork[m] has quit [Ping timeout: 276 seconds]
Avinash[m] has quit [Ping timeout: 276 seconds]
TheReverend403[m has quit [Ping timeout: 276 seconds]
M-davidar-test has quit [Ping timeout: 276 seconds]
muhriddin[m] has quit [Ping timeout: 276 seconds]
arby[m] has quit [Ping timeout: 276 seconds]
<Kubuxu>
Not really.
<Kubuxu>
You can see which hashes were ones you've added or pinned
<Kubuxu>
with ipfs pin ls
M0x52[m] has quit [Ping timeout: 276 seconds]
M-foxxy has quit [Ping timeout: 276 seconds]
M2ezit[m] has quit [Ping timeout: 276 seconds]
<Kubuxu>
but there is not much of metadata there
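Kubuxu's suggestion in concrete form; `--type=recursive` lists only the roots you pinned explicitly, which is usually the closest thing to "what have I added":

```shell
# Roots you pinned yourself (e.g. directories added with -r):
ipfs pin ls --type=recursive

# Everything, including indirect pins (blocks reachable
# from a recursive pin):
ipfs pin ls --type=all
```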
M-Tribex10 has quit [Ping timeout: 276 seconds]
M-8842 has quit [Ping timeout: 276 seconds]
M-ikreymer has quit [Ping timeout: 276 seconds]
M-16544 has quit [Ping timeout: 276 seconds]
M-abdessamadhoud has quit [Ping timeout: 276 seconds]
M-cyan has quit [Ping timeout: 276 seconds]
JosiahHaswell[m] has quit [Ping timeout: 276 seconds]
M18489[m] has quit [Ping timeout: 276 seconds]
M-apolo11 has quit [Ping timeout: 276 seconds]
M-lucnsy has quit [Ping timeout: 276 seconds]
M-12989 has quit [Ping timeout: 276 seconds]
M-david has quit [Ping timeout: 276 seconds]
<jborak>
when you use the webui and an object is a directory/file it seems to know that, even the name of the original file
<jbenet>
Mateon1: lots of optimization remains to be done, so yes
<Mateon1>
jbenet: Cool, currently it looks like 100gigs is 8 hours
<jborak>
I pinned a directory, run ipfs pin ls and i get a list of hashes with some recursive/indirect tags after the hashes. indirect looks like files, recursive must be directories. These are probably the hashes for the blocks that make up all of the data I pinned, right?
arby[m] has joined #ipfs
M-davidar-test has joined #ipfs
muhriddin[m] has joined #ipfs
TheReverend403[m has joined #ipfs
Guest25254[m] has joined #ipfs
igork[m] has joined #ipfs
Avinash[m] has joined #ipfs
M0x52[m] has joined #ipfs
M-foxxy has joined #ipfs
M2ezit[m] has joined #ipfs
<jbenet>
Mateon1: eh not for me. it depends on a lot of factors. network may also be accidentally rate limiting (which it shouldn't)
erde74 has joined #ipfs
<jbenet>
Mateon1: try adding while offline (without the daemon on)
M-8842 has joined #ipfs
M-ikreymer has joined #ipfs
M-16544 has joined #ipfs
M-cyan has joined #ipfs
M-abdessamadhoud has joined #ipfs
<Mateon1>
jbenet: Thanks for the idea
Encrypt has quit [Quit: Quitte]
M-Tribex10 has joined #ipfs
M18489[m] has joined #ipfs
JosiahHaswell[m] has joined #ipfs
M-lucnsy has joined #ipfs
M-david has joined #ipfs
M-12989 has joined #ipfs
<jborak>
how does the ipfs fuse utility pull the directory/file name information from the blocks
<Mateon1>
jbenet: For some reason I'm getting an "Error: api not running"
M-apolo11 has joined #ipfs
<jborak>
anyway to "walk the tree" beyond the using the hashes?
<jbenet>
Mateon1: hmmm odd -- is there a file: ~/.ipfs/api
galois_d_ has quit [Remote host closed the connection]
<jborak>
maybe I'm using it wrong. does one normally maintain a directory as their ipfs root? So I just put data in there, add it, done.
<jbenet>
jborak: more concrete example? (lots of ways to walk the trees)
<Mateon1>
jbenet: Yes, but I use a custom repo dir. So D:\ipfs\api
galois_dmz has joined #ipfs
<jborak>
like a git workflow
<jbenet>
Mateon1 okay try removing it
<jbenet>
(that file)
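What jbenet is suggesting here: the daemon records its API address in a file under the repo, and a stale copy left over from an unclean shutdown can confuse the CLI. Removing it is safe when no daemon is running:

```shell
# Default repo is ~/.ipfs; Mateon1 is using a custom one (D:\ipfs),
# selected via the IPFS_PATH environment variable.
rm "${IPFS_PATH:-$HOME/.ipfs}/api"
```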
<jbenet>
jborak yeah: the `ipfs {add, cat, mount}` are all different porcelain on top of `ipfs object`
<Mateon1>
jbenet: That works, and adding does work way faster without the daemon
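The offline-add trick as commands (a sketch; speedups vary by machine and content):

```shell
# With no daemon running, adds skip all networking:
ipfs add -r /path/to/big-archive

# Start the daemon afterwards; it can take a few minutes to
# advertise the new blocks to the network.
ipfs daemon
```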
<jbenet>
jborak: can implement commits (and we will eventually)
<flyingzumwalt>
CALLS FINISHED!
<Mateon1>
Estimated time is 1 hour instead of 8
<daviddias>
wooot! :D
<jbenet>
Mateon1: yeah. it may take a bit after you boot the daemon for it to propagate content advertisements to the network, so things may not resolve right off the bat when you turn the daemon on.
<dignifiedquire>
thank you everyone for the great calls today!
<jbenet>
"a bit" = a few min
<jborak>
jbenet, when you run ipfs mount, at a high level, is it mounting a single directory you have added to ipfs
<daviddias>
noffle: o/ thank you for listening in on pubsub. I've added you to the pubsub repo on libp2p :)
<jbenet>
yeah nice calls!
<Mateon1>
jbenet: By the way, where can I find the graphing tool for visualizing objects?
<jbenet>
jborak: it's mounting "the dag root" so it's mounting everything at `/ipfs`
<jbenet>
jborak: so `/ipfs/<hash>` works
<jbenet>
jborak but on `/ipns/local` -- yeah that's a dir, or one way to think about it.
<jborak>
jbenet, okay so at the end of the day if I want to maintain a collection of files, I need to keep track of the root dag hash on my own, and from there I can use commands like ipfs ls /ipfs/root-dag-hash or ipfs mount /ipfs/root-dag-hash. Am I using it correctly?
<jbenet>
jborak: yep. we want an `ipfs mount <hash> <path>` command that can mount a dir explicitly, but that's a whole other story.
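jborak's workflow spelled out; `ROOT` stands in for whatever root dag hash `ipfs add -r` printed:

```shell
ROOT=<root-dag-hash>   # placeholder

# Inspect the tree without mounting:
ipfs ls "/ipfs/$ROOT"

# Or mount /ipfs and /ipns and browse with normal tools:
ipfs mount
ls "/ipfs/$ROOT"
```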
<jborak>
jbenet, I can understand. I'm obviously new to ipfs but reading your white paper. I'm slowly getting how the system works, very exciting. kudos!!
jeffl36 has joined #ipfs
<jbenet>
jborak thanks! i really need to write a new draft-- a lot has improved
<jbenet>
jborak: but basics are same
<jborak>
jbenet, at some point I'd like to walk through the code. you guys have done a pretty awesome job, the demo video is very impressive.
<achin>
cool! how was... scotland? where were you?
<richardlitt>
That's still a problem?
<achin>
yeah
<richardlitt>
weird. in the folder where you see cli.js and node_modules/, run: npm install lodash
Stebalien has quit [Ping timeout: 252 seconds]
<achin>
ok, i did that, and then a bunch more
<richardlitt>
running 'npm install' should install them all
<achin>
it didn't :(
<achin>
but anyway
<richardlitt>
That's really weird
<achin>
i try to run it, and i have pages full of this:
<achin>
{ [Error: {"message":"API rate limit exceeded for 72.195.154.175. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)","documentation_url":"https://developer.github.com/v3/#rate-limiting"}]
<richardlitt>
ahhhh
<richardlitt>
did I never have you get an API token?
<richardlitt>
I thought I did.
<achin>
nop
<richardlitt>
Ah man, I'm sorry, should have asked if you have ten minutes :P
<richardlitt>
And want to help out
<achin>
no need, i am comfortable saying 'no' if i need to :)
<achin>
"You have triggered an abuse detection mechanism. Please wait a few minutes before you try again."
<richardlitt>
Damn.
<richardlitt>
Alright, that's what I got.
<richardlitt>
That's good to know!
jgantunes has quit [Quit: Connection closed for inactivity]
<richardlitt>
Alright. So, that's my bug.
<richardlitt>
IPFS is too big.
<achin>
time to add some judicious sleep() statements :)
<richardlitt>
Yeah
<richardlitt>
:D
<richardlitt>
I have no idea where.
<richardlitt>
thank you so much for helping
<achin>
no prob
<richardlitt>
For the lodash problem
<richardlitt>
I see: lodash, depaginate, and moment
<richardlitt>
any other deps I am missing?
<achin>
octokat
<achin>
those are the only 4 i had to install
<richardlitt>
thanks. Fixed issue.
<richardlitt>
:)
<achin>
does anyone have any updated status on ipfs over a UDP-based transport?
<richardlitt>
Not me.
<richardlitt>
be back in a bit
<achin>
k
Stebalien has joined #ipfs
flyingzumwalt has quit [Quit: Leaving.]
<demize>
Don't think there is any updates on it.
mildred has joined #ipfs
cemerick has quit [Read error: Connection reset by peer]
<achin>
i'm hoping UDP will solve my issues with my cable modem
<demize>
In general UDP seems to mostly do the opposite. ;p
santamanno has joined #ipfs
<Mateon1>
The flatdb format for blocks doesn't seem to scale well when you have 100GB of objects in your node...
<Mateon1>
Actually, I have `only` ~10 gigs, and it's already crawling.
<Stebalien>
achin: what demize said. Both UDP and IPv6 tend to cause my router to fall over in a matter of minutes.
Encrypt has joined #ipfs
<Mateon1>
The issue with the flatDB is that we have too many buckets. I only have an average of 2 hashes per bucket
<achin>
my router is fine
<achin>
it's my cable modem that has problems
pfrazee has quit [Remote host closed the connection]
flyingzumwalt has joined #ipfs
<richardlitt>
back
kanej has joined #ipfs
byteflame has quit [Ping timeout: 260 seconds]
kanej has quit [Client Quit]
kanej has joined #ipfs
jborak_ has joined #ipfs
Stebalien has quit [Ping timeout: 264 seconds]
jborak has quit [Read error: Connection reset by peer]
ianopolous has quit [Read error: Connection reset by peer]
<Mateon1>
I have no idea how or why, but I just got a panic on the daemon. Invalid memory or nil dereference
ianopolous has joined #ipfs
<richardlitt>
:/ That's not good.
pfrazee has joined #ipfs
<richardlitt>
Post it in a relevant (or new) issue in ipfs/support? or in ipfs/go-ipfs if you've got any ideas.
<richardlitt>
Not sure how else I can help.
<Mateon1>
Oh, I got an idea. I removed all empty directories within the blockstore...
<Mateon1>
Removing things from under the daemon's nose would do that
jedahan has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
jedahan has joined #ipfs
jedahan_ has joined #ipfs
<Mateon1>
I break everything... Now `ipfs add` doesn't show any progress, just "[size] GB +Inf % -2m25s"
jedahan has quit [Ping timeout: 240 seconds]
<jborak_>
is it possible to update an existing directory with new files or updated files with ipfs?
<jborak_>
example, can I perform an ipfs add -r SOME_DIR, then add a file to SOME_DIR and add it using ipfs add SOME_DIR/newfile.txt
<jborak_>
I tried doing this example but the new file isn't listed when I invoke ipfs ls /ipfs/SOME_DIR_HASH
<jborak_>
I can only seem to use the hash for newfile.txt if I want to retrieve or list it.
<jborak_>
okay... using the hash is the only way to retrieve. I mean to say that when I list the directory I added (recursively) I do not see any new files I added after the fact, even though they exist in that directory.
<achin>
the only thing you can do is add the directory again, after you added newfile.txt
Stebalien has joined #ipfs
<achin>
$SOME_DIR_HASH will remain unchanged, forever
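achin's point, shown as a command sequence (HASH_1 and HASH_2 are placeholders, not real CIDs):

```shell
ipfs add -r SOME_DIR            # prints HASH_1 for the directory
touch SOME_DIR/newfile.txt
ipfs add -r SOME_DIR            # prints a different HASH_2

# HASH_1 still names the old snapshot; it will never list
# newfile.txt, because IPFS objects are immutable.
ipfs ls /ipfs/HASH_1
```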
<jborak_>
I see
<jborak_>
So I just use the new hash after I add the files recursively again
<jborak_>
when you perform ipfs dns ipfs.io and you receive /ipfs/HASH you can list the contents contained in the hash of / for ipfs.io. Does ipfs.io have to update this value in their DNS TXT records every time they make a change to their site?
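Short answer to the question above: yes, unless the TXT record points at an IPNS name instead. At the time of this log the record lived directly on the domain as `dnslink=/ipfs/<hash>` (details may differ in later versions); a sketch:

```shell
# Resolve a domain's DNSLink:
ipfs dns ipfs.io

# Under the hood this reads a DNS TXT record, roughly:
#   ipfs.io.  IN TXT "dnslink=/ipfs/<hash>"
dig +short TXT ipfs.io
```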
<richardlitt>
Anyone want to help me fix rate limits in GitHub :|
Guest98040 has quit [Ping timeout: 250 seconds]
<jborak_>
richardlitt, is github rate limiting you?
<richardlitt>
Yes
<deltab>
when doing what?
<achin>
richardlitt: do you know how many requsts you are making per second?
<richardlitt>
Hitting it with a couple of hundred requests in quick succession
<jborak_>
I'd say try a proxy/vpn if you can stand one up quickly on a VM somewhere
<richardlitt>
No, I'm not sure how to get that information
<richardlitt>
jborak_: Not sure that will work, using an authenticated API, which gives me 5000 extra
<achin>
are you using a library to make the API calls?
<richardlitt>
I just keep getting automatically triggering
<richardlitt>
Yeah, octokat
<jborak_>
richardlitt, probably tracking your rate via your token, guess you have to wait and take it easy on them ;)
<richardlitt>
Any idea how to track my outbound API requests?
<richardlitt>
Perhaps using wireshark? Can't figure out how to just see stuff coming from my CLI to GitHub with Wireshark
<achin>
the dumb way would be with wireshark
<achin>
but maybe there is a smarter way :)
<deltab>
richardlitt: the response has headers for the limit, the number of requests remaining, and the reset time
<deltab>
what's the client?
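deltab's point about the response headers can be checked from the shell; `GH_TOKEN` is a placeholder for a personal access token, and the `/rate_limit` endpoint itself does not count against the quota:

```shell
# Print the X-RateLimit-Limit / -Remaining / -Reset headers:
curl -sI -H "Authorization: token $GH_TOKEN" \
  https://api.github.com/rate_limit | grep -i '^x-ratelimit'
```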
jedahan_ has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<richardlitt>
I'm making a lot of requests. Not sure where to console log.
<richardlitt>
using a cli tool I have to interact with the GitHub api, deltab
jedahan has joined #ipfs
<deltab>
what's it written in?
<deltab>
not sure where to log because of the volume, or some other reason?
<richardlitt>
Nodejs
<richardlitt>
because I have multiple points in the code where I hit the API
LegalResale has quit [Quit: Leaving]
<richardlitt>
And lots of promises and and mapped arrays with calls inside them