<stellar-slack> <hsxuif> @jed: maybe the link for "run a node" is incorrect? it points to https://www-stg.stellar.org/developers/stellar-core/learn/admin.html
<stellar-slack> <jed> that is the best doc for setting up a node right now
pixelbeat has quit [Ping timeout: 240 seconds]
sacarlson has joined #stellar-dev
<stellar-slack> <buhrmi> mhhh getting txBadAuths :(
<stellar-slack> <sacarlson> on testnet?
<stellar-slack> <buhrmi> live
<stellar-slack> <sacarlson> I didn't even know we had a live? how did you fund the account?
<stellar-slack> <sacarlson> I saw what they called release of live but no one answered my question as to how to fund them
<stellar-slack> <buhrmi> to be honest, i don't know... but i have 20 balance https://horizon-live.stellar.org/accounts/GAAZI4TCR3TY5OJHCTJC2A4QSY6CJWJH5IAJTGKIN2ER7LBNVKOCCWN7
<stellar-slack> <sacarlson> cool, but with 20 you can't add any lines of credit; you need 30 I think
<stellar-slack> <sacarlson> 20 is min so you can't even send a native transaction
<stellar-slack> <buhrmi> it's 200,000,000 / Stellar::ONE = 20
<stellar-slack> <sacarlson> well yes you need a min 200,000,000 then
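For context on the arithmetic above: Stellar balances are stored as integer stroops, and `Stellar::ONE` (10,000,000 stroops per lumen) is the ruby base library's conversion constant. A minimal sketch of the same conversion in JavaScript (the function name is just illustrative):

```js
// Stellar stores balances as integer stroops; 1 XLM = 10,000,000 stroops.
const ONE = 10000000; // mirrors Stellar::ONE in the ruby base library

function stroopsToLumens(stroops) {
  return stroops / ONE;
}

console.log(stroopsToLumens(200000000)); // => 20, matching the balance above
```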
<stellar-slack> <buhrmi> it's the root account lol
<stellar-slack> <sacarlson> oh so the root account master is still open?
sacarlson has left #stellar-dev [#stellar-dev]
<stellar-slack> <sacarlson> I looked to see if I could bring up live-net with my stellar-core but I failed to find the #[HISTORY.h1] locations for live in any docs so far
<stellar-slack> <lab> i'm sorry, the root account is disabled by me. :)
<stellar-slack> <sacarlson> I hope you took some coins first
<stellar-slack> <lab> yes, 100 billion Lumens..
<stellar-slack> <sacarlson> good send me 10k later then
<stellar-slack> <lab> if i had some. i will donate those lumens to SDF. :)
<stellar-slack> <lab> otherwise they won't continue developing stellar-core..
<stellar-slack> <buhrmi> so who controls the address with the billions of lumens? i cant do anything
<stellar-slack> <sacarlson> my guess is they are doing preliminary tests on it before they start distribution
<stellar-slack> <sacarlson> but never hurts to ask buhrmi
TheSeven has quit [Disconnected by services]
[7] has joined #stellar-dev
nivah has quit [Ping timeout: 246 seconds]
<stellar-slack> <sacarlson> I'm not sure this makes sense, but I would like some ideas on how to set up the infrastructure for a dynamic multi-sig account, to allow automated joining and exiting from a joint account
<stellar-slack> <sacarlson> this is the framework I was thinking of so far but I don't think it will work as stated
<stellar-slack> <sacarlson> this would allow my poker players to enter a game that is already in play, or exit a game that continues, with a safe transition of entry and exit from a joint multi-signed account
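For reference, the building block for this kind of joint account is the setOptions operation, which adds or removes signers and sets signing thresholds. A minimal sketch with js-stellar-base (field names follow the current library and may differ from the 2015 API; the player key is just an example address borrowed from this thread):

```js
const StellarBase = require('stellar-base');

// Add a player as a signer (weight 1) on the shared pot account and
// require 2 signatures for payments (medium threshold).
const addPlayer = StellarBase.Operation.setOptions({
  signer: {
    ed25519PublicKey: 'GDVXG2FMFFSUMMMBIUEMWPZAIU2FNCH7QNGJMWRXRD6K5FZK5KJS4DDR', // example key
    weight: 1,
  },
  medThreshold: 2,
});

// Exiting the game is the same operation with the signer's weight set to 0,
// which removes the signer from the account.
const removePlayer = StellarBase.Operation.setOptions({
  signer: {
    ed25519PublicKey: 'GDVXG2FMFFSUMMMBIUEMWPZAIU2FNCH7QNGJMWRXRD6K5FZK5KJS4DDR',
    weight: 0,
  },
});
```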
<stellar-slack> <hsxuif> @jed: but access to that link requires admin privilege o:)
<stellar-slack> <hsxuif> seems so, not quite sure
<stellar-slack> <sacarlson> hsxuif: oh you want to run stellar-core? I'm not totally sure what the diff between a node and a validator is, but to start you need to install stellar-core
<stellar-slack> <buhrmi> http://stellar-core.org now running on live net :)
<stellar-slack> <buhrmi> so empty lol
<stellar-slack> <sacarlson> oh cool you got it buhrmi what's the config file for it?
<stellar-slack> <buhrmi> u can use that one https://github.com/stellar/docs/blob/master/validators.md
<stellar-slack> <sacarlson> cool thanks buhrmi I'm now hooking in
<stellar-slack> <sacarlson> I'm using a VPN so I'll be looking like I'm in Singapore
<stellar-slack> <sacarlson> I got synced and show 8 peers; why don't I see the peer list on your server, buhrmi?
<stellar-slack> <buhrmi> cause i broke the peer list, let me redeploy
<stellar-slack> <sacarlson> ok
<stellar-slack> <sacarlson> I failed to enter a database, so it came up with no data to look at. I didn't even know it would run without at least some database like sqlite
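(For what it's worth, the database is a single line in the node's .cfg; sqlite3 and postgresql URLs are the supported forms. A placeholder example:)

```
DATABASE="sqlite3://stellar.db"
```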
<stellar-slack> <buhrmi> peer list is back
<stellar-slack> <sacarlson> I show that there are 6 accounts
sacarlson has joined #stellar-dev
<stellar-slack> <lab> right
<stellar-slack> <lab> biggest is sdf, big but not biggest is helpsdf..
<stellar-slack> <lab> others are mine...
<stellar-slack> <lab> :)
<stellar-slack> <lab> rocket is already refueled..
<stellar-slack> <buhrmi> can u fund me GDVXG2FMFFSUMMMBIUEMWPZAIU2FNCH7QNGJMWRXRD6K5FZK5KJS4DDR
<stellar-slack> <sacarlson> to prove you own it change your home_domain to lab :grinning:
<stellar-slack> <lab> :)
<stellar-slack> <sacarlson> otherwise how will we know what to point the inflation money to?
<stellar-slack> <lab> buhrmi: sdf's accounts aren't fully set up yet; after that they will fund us
<stellar-slack> <buhrmi> smooth launch so far
<stellar-slack> <eno> my validator runs well
<stellar-slack> <sacarlson> my validator will crash within 24 hours when my ip address is changed by my ISP
<stellar-slack> <sacarlson> shouldn't say crash, it just locks up in a non-synced state
de_henne has quit [Remote host closed the connection]
de_henne has joined #stellar-dev
<stellar-slack> <eno> _zombie_
<stellar-slack> <eno> why does `https://horizon-live.stellar.org/accounts` not show `GAAZI4TCR3TY5OJHCTJC2A4QSY6CJWJH5IAJTGKIN2ER7LBNVKOCCWN7`?
<stellar-slack> <buhrmi> probably because it's the root account
<stellar-slack> <eno> you are right, it's the root account. but still confused.
pixelbeat has joined #stellar-dev
sacarlson has left #stellar-dev [#stellar-dev]
<stellar-slack> <2667273151> :kissing_closed_eyes: :kissing_closed_eyes: :kissing_closed_eyes:
<stellar-slack> <buhrmi> :heart_eyes: :kissing_heart::kissing_closed_eyes:
<stellar-slack> <lab> buhrmi: are you collecting peer health data?
<stellar-slack> <lab> MTBF, MTTR, MTTF etc.
Kwelstr has quit [Ping timeout: 240 seconds]
Kwelstr has joined #stellar-dev
pixelbeat has quit [Ping timeout: 260 seconds]
pixelbeat has joined #stellar-dev
<stellar-slack> <donovan> So, how do we get some Lumens to test out the new network? :)
<stellar-slack> <sacarlson> good question donovan. I have no idea
<stellar-slack> <sacarlson> I can only tell you that last I looked there were only 6 accounts on the network
<stellar-slack> <lab> jed [12:42 PM] oh you don't have to. SDF will fund people in the dev channel
<stellar-slack> <donovan> Basic question, how do I generate a key pair?
<stellar-slack> <lab> it's the same as generating a node_id
<stellar-slack> <sacarlson> in that case he could generate one here: https://www.stellar.org/developers/tools/client/
<stellar-slack> <donovan> so, just `stellar-core --genseed`
<stellar-slack> <lab> yes
<stellar-slack> <sacarlson> ya that would also work
<stellar-slack> <bartek> you can also use the js/ruby/go base lib and its `Keypair.random()` function
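A minimal sketch of that in JavaScript with stellar-base (accessor names have changed across versions of the library, so treat the method names as assumptions for your version):

```js
const StellarBase = require('stellar-base');

// Generate a random ed25519 keypair.
const pair = StellarBase.Keypair.random();

console.log(pair.publicKey()); // G...  the public account ID
console.log(pair.secret());    // S...  the seed; keep this private
```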
<stellar-slack> <donovan> Great, thanks! Anyone want to send a Lumen to `GAORN5O6AQUHW3F6ZVOTN67RAZSONRNKP7WOHZ4XBHDMRKKLBTFTSNC6` I’d be grateful :)
<stellar-slack> <donovan> Next basic question, is there a minimum funding amount per account, a la Ripple?
<stellar-slack> <donovan> i.e. reserves
<stellar-slack> <donovan> Looks like I need 11 XLM :)
<stellar-slack> <donovan> Oops, 21 XLM!
<stellar-slack> <sacarlson> and my hopeful stellar address if anyone needs to know: GDW3CNKSP5AOTDQ2YCKNGC6L65CE4JDX3JS5BV427OB54HCF2J4PUEVG
<stellar-slack> <lab> tip me: GAS2FDJIROHCJDM43TKDOPDSCYMVPGMULGF42QR65FINKVXHNDJTJC6E
<stellar-slack> <sacarlson> donovan: and more if you hope to have any added signers on your account; they cost an additional 10 XLM each as far as I know
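Putting the numbers from this thread together (a sketch of the reserve rule as described here, assuming a 10 XLM base reserve and a minimum of (2 + subentries) × base reserve; the exact values are whatever the network enforces, not fixed constants):

```js
const baseReserve = 10; // XLM, assumed from the figures quoted in this thread

// Each trustline, offer, or extra signer counts as one subentry.
function minimumBalance(subentries) {
  return (2 + subentries) * baseReserve;
}

console.log(minimumBalance(0)); // 20 XLM: bare account
console.log(minimumBalance(1)); // 30 XLM: e.g. one trustline or one extra signer
```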
<stellar-slack> <donovan> Next stop, vanity account generator :)
<stellar-slack> <lab> 21 is not enough, the fee is expensive.
<stellar-slack> <donovan> Who’s the lucky `GAENIE5LBJIXLMJIAJ7225IUPA6CX7EGHUXRX5FLCZFFAQSG2ZUYSWFK`?
<stellar-slack> <lab> fyi: Keypair.random in js stellar base runs at ~200/s
<stellar-slack> <sacarlson> lab the fee is only expensive for people using the js sdk, which costs 10X more than it does for ruby
<stellar-slack> <lab> helpsdf, donovan
<stellar-slack> <donovan> @lab, not sure I’ll be using js :)
<stellar-slack> <lab> ruby code is unchanged.
<stellar-slack> <lab> i was told it's about 2k/s in go
<stellar-slack> <sacarlson> there is a bug in js sdk that charges 1000 stroops instead of 100 stroops
<stellar-slack> <donovan> Sounds about right :)
<stellar-slack> <donovan> I’m sure stellar doesn’t have tx ordering issues though :)
<stellar-slack> <lab> sacarlson, i see, i didn't use js sdk but js base
<stellar-slack> <sacarlson> yes I think the bug is in js.base I should assume
<stellar-slack> <sacarlson> but js sdk uses js.base so ...
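In concrete terms (plain arithmetic, not tied to either SDK):

```js
const ONE = 10000000;    // stroops per XLM
const intendedFee = 100; // stroops, the correct base fee per this thread
const buggyFee = 1000;   // stroops, what the js sdk was reportedly charging

console.log(buggyFee / intendedFee); // 10x overcharge
console.log(intendedFee / ONE);      // 0.00001 XLM
```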
<stellar-slack> <lab> tx ordering... did anyone really make a profit front-running?
<stellar-slack> <donovan> Well, I made plenty of profit from being the first arbitrageur in a lot of ledgers. There was one account that looked like it might be front-running...
<stellar-slack> <lab> i wanted to make an exploit months ago, but i thought there might be many competitors and abandoned it
<stellar-slack> <donovan> To be honest, spinning the tx hashes was the easy bit. Correctly calculating the offer funding and checking all the account flags was the hard bit :)
<stellar-slack> <donovan> Also, working out best paths from all possible paths in a small amount of time.
<stellar-slack> <lab> there were many deep dumps/pumps in ripple
<stellar-slack> <donovan> Yup.
<stellar-slack> <lab> front running should be profitable.
<stellar-slack> <donovan> Well, they’ve finally pushed a fix.
<stellar-slack> <hsxuif> why is there an "end of file" error here: https://gist.github.com/HongxuChen/01bfc1240eec3c45af6d ?
<stellar-slack> <donovan> Looks like a network connection got closed...
<stellar-slack> <hsxuif> so it results from "connection refused", right?
<stellar-slack> <sacarlson> does it fail to come up hsxuif? maybe show us your config file so we can take a look
<stellar-slack> <sacarlson> wrong network setting maybe or running wrong version of stellar-core?
<stellar-slack> <hsxuif> the HEAD is 51e297a
<stellar-slack> <donovan> You might want to change your `NODE_SEED` again now :)
<stellar-slack> <hsxuif> i'll change later :)
<stellar-slack> <donovan> I guess 54.211.174.177 and 54.161.82.181 might just not be running stellar-core at that point in time.
<stellar-slack> <sacarlson> hsxuif: a lot of strange things in your config file. you want to just try mine?
<stellar-slack> <donovan> You did `git checkout v0.1`?
<stellar-slack> <hsxuif> @sacarlson: where can i find it? would u please share?
<stellar-slack> <sacarlson> yes but I note your commit version is weird also
<stellar-slack> <sacarlson> should be around 59a2dea7e8a24f8a301e657c1a7445ff094b14d4
<stellar-slack> <hsxuif> @donovan: guess that the latest change to v0.1 is the install.md?
<stellar-slack> <donovan> who knows :-) `git log` is your friend. I’m just going to be deploying tagged releases to save all this confusion :)
<stellar-slack> <sacarlson> this is the config I've been using as of late:
<stellar-slack> <sacarlson> for testnet
<stellar-slack> <sacarlson> I also have a working live net config
<stellar-slack> <hsxuif> i think the commit is correct, since the long hash is the same
<stellar-slack> <hsxuif> @sacarlson: so you are mostly using https://github.com/stellar/stellar-core/blob/master/docs/stellar-core_testnet.cfg , right?
<stellar-slack> <donovan> Anyone got a link to an example payment transaction which includes a path? Only found this: https://www.stellar.org/developers/learn/concepts/list-of-operations.html#path-payment
<stellar-slack> <sacarlson> never tried it
<stellar-slack> <sacarlson> hsxuif: yes your commit looks good it's only got one small change in a doc file over the main release
<stellar-slack> <donovan> ```NETWORK_PASSPHRASE="Public Global Stellar Network ; September 2015"``` Looks different
<stellar-slack> <dzham> horizon doesn’t generate paths yet, no? so you’d still have to generate them manually anyway
<stellar-slack> <donovan> @dzham: Just want to look at an example path.
<stellar-slack> <dzham> gotcha
<stellar-slack> <sacarlson> that's the live network passphrase
<stellar-slack> <donovan> @sacarlson: That’s what I’m running on :)
<stellar-slack> <dzham> js-stellar-sdk still doesn’t work for me
<stellar-slack> <sacarlson> but nothing to play with on live yet unless you have some funds
<stellar-slack> <donovan> I just thought that might be a problem with @hsxuif config… Who knows, I’m just a newbie :)
<stellar-slack> <sacarlson> I gave hsxuif a testnet config assuming that's what he wanted
<stellar-slack> <hsxuif> @donovan: also a newbie here, but thanks very much
<stellar-slack> <hsxuif> @sacarlson: guess that i can use the testnet cfg for now
<stellar-slack> <hsxuif> thank u both
<stellar-slack> <sacarlson> ya you can do more with it at this time anyway
<stellar-slack> <hsxuif> just haven't got many ideas how to configure that, :S
stellar-slack has quit [Remote host closed the connection]
stellar-slack has joined #stellar-dev
<stellar-slack> <sacarlson> hsxuif: I sent a copy of my config didn't you get it?
<stellar-slack> <donovan> I think there are too many things to configure. Sensible defaults, with only `QUORUM_SET`, `NODE_SEED` and `NETWORK_PASSPHRASE` needing to be configured, would be ideal. It would be good if there was some kind of DNS bootstrapping for `KNOWN_PEERS` and all the history stuff. If all the files lived under a default directory as well, that would save some pain.
<stellar-slack> <donovan> stellar-core should validate the config fully at startup as well, rather than when catchup starts.
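For a sense of scale, the minimal file donovan is describing might look something like this (a hypothetical sketch, not a supported minimal mode; the field names follow stellar-core's documented config format, and all values are placeholders):

```
NETWORK_PASSPHRASE="Public Global Stellar Network ; September 2015"
NODE_SEED="S...your node seed..."
# ideally bootstrapped via DNS, per the suggestion above
KNOWN_PEERS=["peer-a.example.org","peer-b.example.org"]

[QUORUM_SET]
VALIDATORS=["G...validator-1...","G...validator-2...","G...validator-3..."]
```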
<stellar-slack> <hsxuif> @sacarlson: sorry don't know how to get that? you mean https://stellar-public.slack.com/files/sacarlson/F0BLP7DQ8/-.yaml ?
<stellar-slack> <sacarlson> ya that looks to be it
<stellar-slack> <donovan> That one is mine. Some of the history curl commands didn’t work, so I had to comment them out.
<stellar-slack> <hsxuif> @donovan: yeah, the validation should be much earlier
<stellar-slack> <hsxuif> thanks for sharing
<stellar-slack> <sacarlson> just plug them all in and try them all
<stellar-slack> <sacarlson> the last 2 look like they should work, but donovan's is for live net, so you can't play with any transactions with it
<stellar-slack> <sacarlson> but you can view the database to see what activity there is; that can be seen here: http://stellar-core.org/
<stellar-slack> <sacarlson> almost my beer drinking hour, so last chance to ask me any questions. the main shift will be here soon with better answers than mine
<stellar-slack> <hsxuif> another issue: after a keyboard interrupt (Ctrl-C), stellar-core failed to exit normally (only happened once): https://gist.github.com/HongxuChen/e6b47bb86f0459fd56ef
<stellar-slack> <sacarlson> so you had to hit ctrl-c 2 times?
<stellar-slack> <hsxuif> no, only `kill -9` solves it
<stellar-slack> <dzham> can anyone submit transactions to horizon? what format are you POSTing in, in that case?
<stellar-slack> <sacarlson> I'm not sure you guys have the needed scripts to fully reset and normally start stellar-core; I guess I should share them. oh, I don't recall having to do that before
<stellar-slack> <hsxuif> @donovan: tried your conf, and it seems to work
<stellar-slack> <hsxuif> :)
<stellar-slack> <sacarlson> oh cool
<stellar-slack> <bartek> @donovan: `application/x-www-form-urlencoded`
<stellar-slack> <bartek> I'm fixing this in js-sdk right now
<stellar-slack> <dzham> OK, I’ll wait.. got my head too deep into go ATM
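Concretely, the submission bartek describes is a form-encoded POST of the base64-encoded transaction envelope XDR; horizon's form field is named `tx` (a sketch; the envelope string is a placeholder):

```
curl -X POST "https://horizon-live.stellar.org/transactions" \
  --data-urlencode "tx=AAAA...base64-envelope-XDR..."
```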
<stellar-slack> <sacarlson> this is how I do full resets and startup of stellar-core. not sure if it's the best way, but it has been working for me https://github.com/sacarlson/stellar_utility/tree/master/stellar-db2
<stellar-slack> <sacarlson> ok beer time
<stellar-slack> <eno> why not try the live network? @hsxuif
<stellar-slack> <hsxuif> @donovan: one more "end of file" error, but there are new connections
<stellar-slack> <donovan> @hsxuif Looks like some of the S3 paths changed: https://github.com/stellar/docs/blob/master/validators.md arghhh :)
<stellar-slack> <hsxuif> @eno: trying
<stellar-slack> <hsxuif> so http://stellar-core.org is for the live network, right?
<stellar-slack> <hsxuif> my ip is in the peer list, but still there is "end of file"
<stellar-slack> <donovan> My node doesn’t do the catchup stage anymore when I restart. No idea why :)
<stellar-slack> <hsxuif> @donovan: okay, thanks
<stellar-slack> <donovan> If I want to trigger catchup: https://github.com/stellar/stellar-core/blob/master/docs/learn/commands.md#http-commands How do I know what ledger to use?
<stellar-slack> <bartek> @donovan: you can set `CATCHUP_COMPLETE=true` in your config and your node will perform a complete catchup
<stellar-slack> <donovan> @bartek: I have that set, but it doesn’t start.
<stellar-slack> <bartek> there is no info about a catchup in stellar-core logs?
<stellar-slack> <donovan> No, the usual stuff has stopped appearing.
<stellar-slack> <eno> my validator lost its connection. what i can see is dropping, handshake, connect, read timeout, even after a reboot.
<stellar-slack> <scott> @donovan: does the last line of output appear to be truncated? I’m having an issue where it appears that one of the forked processes used in acquiring history is mucking with stdout.
<stellar-slack> <donovan> @scott, no, only one stellar process running: ```
ps aux | grep stellar
root  4280  0.2  0.3  124780 12724 pts/7   Sl+  14:19  0:00  stellar-core
root  4286  0.0  0.0    9732  2064 pts/14  S+   14:19  0:00  grep --color=auto stellar
```
<stellar-slack> <scott> it forks, finishes, and exits pretty quickly in normal cases.
<stellar-slack> <donovan> No stdout corruption, just the previously witnessed catchup log lines no longer appear. How do I get more logging?
<stellar-slack> <scott> Add a line like the linked to your config: https://github.com/stellar/stellar-core/blob/master/docs/stellar-core_standalone.cfg#L16
<stellar-slack> <scott> Or, an http get to `/ll?level=debug` for a running instance
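For example (a sketch; 11626 is the admin HTTP port used in the sample configs, so adjust to your `HTTP_PORT`):

```
# raise logging on a running node via the admin HTTP interface
curl "http://127.0.0.1:11626/ll?level=debug"
```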
<stellar-slack> <lab> my validator is down
<stellar-slack> <lab> ```
2015-10-01T12:53:35.087 fa34c0 [] [Ledger] INFO Closed ledger: [seq=14065, hash=b056da]
Socket timeout, please try again later.
2015-10-01T12:53:36.403 fa34c0 [] [Process] WARN process 4615 exited 1: osscmd -H http://oss-cn-beijing.aliyuncs.com put tmp/snapshot-8564d3f1fdbd4536/ledger-000032bf.xdr.gz oss://stellar/xlm/ledger/00/00/32/ledger-000032bf.xdr.gz
2015-10-01T12:53:36.404 fa… action failed on ledger-000032bf.xdr
stellar-core: history/PublishStateMachine.cpp:204: void stellar::ArchivePublisher::enterSendingState(): Assertion `mState == PUBLISH_OBSERVED || mState == PUBLISH_SENDING' failed.
Aborted (core dumped)
root#
```
<stellar-slack> <lab> maybe that's why the network halted
<stellar-slack> <scott> I’m sure the core team guys would be really interested in that core dump if you're willing to provide it
<stellar-slack> <nelisky> did you guys break the network? :)
<stellar-slack> <nelisky> ```
2015-10-01T15:30:14.359 4d1d69 [139944491358080] [default] INFO
{ "info" : {
    "build" : "v0.1-1-g51e297a",
    "ledger" : { "age" : 3059, "closeTime" : 1443706755,
                 "hash" : "5ab813a4ccb1283fb58dc509ca29e0ceedf2e8c65fd6765849fafe3f0d86d53e",
                 "num" : 14606 },
    "network" : "Public Global Stellar Network ; September 2015",
    "numPeers" : 8,
    "prot… "Joining SCP" } }
```
<stellar-slack> <donovan> @lab `ulimit -c unlimited`
<stellar-slack> <donovan> @scott I get the feeling that using system utilities and curl, aws, osscmd, etc. is a bit fragile with regards to getting history. How are the timeouts being handled without using a native C++ client?
<stellar-slack> <scott> that’d be something to ask @monsieurnicolas or @graydon. I don’t have the experience with that code base to properly answer
<stellar-slack> <eno> so many versions in the peers list: v0.1, "v0.1-1-g51e297a", "59a2dea", "d3adf91"
<stellar-slack> <donovan> @nelisky: Have you added your own patches to the tagged release?
<stellar-slack> <eno> DEBUG Slot::processEnvelope i: 14606 {ENV@ea5ad3 | i: 14606 | NOMINATE | :anguished: fa83bf | X: { '[ txH: ef5b56, ct: 1443706760, upgrades: [ ] ]',} | Y: {} }
<stellar-slack> <lab> donovan, did you get ledger seq=14605?
<stellar-slack> <eno> stuck at the 14606
<stellar-slack> <donovan> @lab, no idea :) I’ve already done a newdb :)
<stellar-slack> <lab> anyone else get this in log ? Got consensus: [seq=14605, prev=61ba9b, tx_count=0, sv: [ txH: 470fb4, ct: 1443706755, upgrades: [ ] ]]
<stellar-slack> <lab> i will add your node_id to my quorum slice
<stellar-slack> <lab> please check
<stellar-slack> <lab> donovan, it's a pity
<stellar-slack> <lab> we need the same LCL to forcescp to restart
<stellar-slack> <donovan> I think it’s much better to avoid the “missing ledgers fiasco” of another network and only start the live network when these type of bugs have been ironed out :)
<stellar-slack> <lab> thanks to scp, the ledgers are not missing :)
<stellar-slack> <lab> it's also a chance to upgrade my quorum set to 7 :)
<stellar-slack> <hsxuif> @lab: can u share your validators?
<stellar-slack> <lab> it's just the same as https://github.com/stellar/docs/blob/master/validators.md
<stellar-slack> <nelisky> @donovan: what patches are those?
<stellar-slack> <lab> nelisky, what's your last lcl?
<stellar-slack> <donovan> @nelisky: Dunno, just wondered how you got `v0.1-1-g51e297a` in your version field?
<stellar-slack> <nelisky> oh, no idea, that was HEAD when I cloned, I guess
<stellar-slack> <eno> Loaded last known ledger: [seq=14605, hash=5ab813]
<stellar-slack> <lab> eno, your lcl is 14605, it's ok. i will add your node to quorum
<stellar-slack> <lab> it's 4 already, we need 5 at least
<stellar-slack> <nelisky> @lab 14606
<stellar-slack> <lab> good, i will add yours
<stellar-slack> <donovan> @lab, were you only trusting your own servers previously?
<stellar-slack> <lab> there are 5 with lcl 14606 among the 7; donovan is one of the other 2 in the 7
<stellar-slack> <lab> one free seat, who will get on? buhrmi?
<stellar-slack> <lab> how is your node?
<stellar-slack> <eno> http://stellar-core.org shows 14605
<stellar-slack> <lab> good. here we go. i am changing quorum set
<stellar-slack> <lab> @donovan: plus your's
<stellar-slack> <lab> ```
FAILURE_SAFETY=2

[QUORUM_SET]
VALIDATORS=[
  "GD5DJQDDBKGAYNEAXU562HYGOOSYAEOO6AS53PZXBOZGCP5M2OPGMZV3",
  "GBGGNBZVYNMVLCWNQRO7ASU6XX2MRPITAGLASRWOWLB4ZIIPHMGNMC4I",
  "GDPJ4DPPFEIP2YTSQNOKT7NMLPKU2FFVOEIJMG36RCMBWBUR4GTXLL57",
  "GBGR22MRCIVW2UZHFXMY5UIBJGPYABPQXQ5GGMNCSUM2KHE3N6CNH6G5",
  "GAOO3LWBC4XF6VWRP5ESJ6IBHAISVJMSBTALHOQM2EZG7Q477UWA6L7U",
  "GDVFVU7KURZ22MO56MP7IWX7Q4PEMHFQCOBTJVD2IR4OITEAUN64CCT2",
  "GB6REF5GOGGSEHZ3L2YK6K4T4KX3YDMWHDCPMV7MZJDLHBDNZXEPRBGM"
]
```
<stellar-slack> <lab> glad to have one more failure safety
<stellar-slack> <nelisky> @lab update and restart on our side?
<stellar-slack> <lab> just ctrl-x to stellar-core and
<stellar-slack> <lab> run "stellar-core --forcescp && stellar-core"
<stellar-slack> <lab> @nelisky: @eno: @buhrmi , here we go
<stellar-slack> <lab> need 5 force to start
<stellar-slack> <lab> ctrl-c
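lab's recipe, spelled out (a sketch; assumes stellar-core is on PATH and is run from the node's working directory):

```
# stop the running node with Ctrl-C, then:
stellar-core --forcescp   # flag the node to force SCP from its last closed ledger on next start
stellar-core              # start the node again
```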
<stellar-slack> <nelisky> right...
<stellar-slack> <eno> 2015-10-01T11:03:38.100 1cedae [] [Herder] INFO Lost track of consensus
<stellar-slack> <lab> it's ok, just wait
<stellar-slack> <nelisky> not much has changed from my point of view :)
<stellar-slack> <eno> same
<stellar-slack> <lab> wait me
<stellar-slack> <lab> my nodes restarted with forcescp, did you? @nelisky @eno @buhrmi
<stellar-slack> <nelisky> @lab, yes, as instructed
<stellar-slack> <graydon> donovan: the downloading thing retries on failure. it does not time out a given download attempt, though I suppose I could add timeouts to the process-runner. I believe TCP connection timeouts will occur on a connect failure; I'm not sure if there's a corner case in there where a very slow HTTP transaction could suspend it indefinitely.
<stellar-slack> <graydon> donovan: however the subprocess runner also runs several processes in parallel (configurable, default is 32) so if 1 or 2 hang for quite a while, the other 30-or-so will carry on chewing through downloads
<stellar-slack> <graydon> donovan: in practice we haven't seen problems around timeouts, but I'm happy to tweak this / try to improve it
<stellar-slack> <donovan> @graydon: I manually entered some of the curl commands for the CDNs based in China and they did block for quite some time.
<stellar-slack> <graydon> interesting. so .. as a short-term workaround, curl itself has a command-line timeout you can set
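For instance, in the history get template (the archive host here is a placeholder; `{0}`/`{1}` are the documented substitution slots for the remote and local paths):

```
[HISTORY.sdf]
get="curl -sf --max-time 60 --retry 3 https://history.example.org/{0} -o {1}"
```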
<stellar-slack> <donovan> I guess the point is that it’s quite hard to get the cause of a failure when using subprocesses without parsing text. An HTTP client would be a bit more informative for marking bad URLs.
<stellar-slack> <graydon> I suspect if we did this at the process-runner level, we'd need a timeout that increased with retries, in case it killed an in-process-but-kinda-slow download. it's hard to differentiate those.
<stellar-slack> <graydon> that's true. I guess it would be possible to build in an HTTP client as well, I'm just hesitant about the expansion of surface area (code-wise, testing, attack surface, interaction with our event loop, etc.)
<stellar-slack> <donovan> Another option is to just deliver history through an XDR message...
<stellar-slack> <graydon> (I mean technically we _have_ an HTTP client built in already, it's just far too minimal -- and synchronous -- to use at any scale. it's only used for injecting commands into running stellar-core processes' HTTP interfaces)
<stellar-slack> <graydon> near the beginning of the rewrite we wrestled with that a lot (whether to send history over the p2p layer)
<stellar-slack> <graydon> I argued a lot then, and feel pretty similarly now, that it's better to go _via_ a history archive so that we ensure the archives always exist
<stellar-slack> <graydon> keeping in mind that history isn't kept anywhere on the node. if the archives fail, history's gone.
<stellar-slack> <donovan> It’s great that history is available over HTTP for analysis and processing, I just get a bit nervous about all the configuration to make it work. I made lots of mistakes getting my config file correct.
<stellar-slack> <donovan> Maybe that says more about me :)
<stellar-slack> <graydon> history has much higher long-term availability requirements, and much lower I/O requirements, than the live node data. so I think it makes sense to keep it off-node. I hear what you're saying about config files. not sure what to do about it, is all.
<stellar-slack> <graydon> HTTP built in would help for the get-side
<stellar-slack> <graydon> for the put-side, people have a lot of ways they're going to want to publish data
<stellar-slack> <graydon> if it were easier to diagnose failures in it, would that help?
<stellar-slack> <graydon> like --newhist is sufficient for debugging, but that's not the most obvious
<stellar-slack> <scott> Seems like we should publish a config file with a history config pointing to SDF’s archive
<stellar-slack> <donovan> RE: config, would be good to just set a BASE_DIR and have all directories default to subdirectories of that directory. If someone wants to override it they could, but sensible defaults are always a winner.
<stellar-slack> <graydon> maybe something like --test-history
<stellar-slack> <donovan> OPTIONS is a good way of testing HTTP endpoints
<stellar-slack> <graydon> donovan: you mean rather than TMP and BUCKETS? there's a bug open on that, yeah. I mean to get around to it. hasn't been high enough priority yet, lessee.. https://github.com/stellar/stellar-core/issues/367
<stellar-slack> <graydon> donovan: well we only have the two commands in config, get and put. but we could test get commands by getting the .well-known/stellar-history.json file from an archive.
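e.g. (a sketch; the host is a placeholder, and the `.well-known/stellar-history.json` path is as graydon describes):

```
curl -sf https://history.example.org/.well-known/stellar-history.json
```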
<stellar-slack> <graydon> testing put commands is trickier since we have no delete command. no way to put a random temp file then delete it
<stellar-slack> <donovan> There’s always C++11’s std::tmpfile: http://en.cppreference.com/w/cpp/io/c/tmpfile
<stellar-slack> <donovan> I guess the config validation could be just checking that all named directories are writable (with a quickly deleted testfile) at startup rather than when the catchup process tries to put the file in the directory.
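The startup check donovan is suggesting amounts to this (a shell equivalent of the idea; stellar-core would do it in-process, and `$BUCKET_DIR` is a placeholder for any configured directory):

```
# verify a configured directory is writable by creating and removing a scratch file
touch "$BUCKET_DIR/.write-test" && rm "$BUCKET_DIR/.write-test" \
  || echo "config error: $BUCKET_DIR is not writable" >&2
```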
<stellar-slack> <donovan> But yeah, for HTTP upload, it’s all a bit more complicated...
<stellar-slack> <donovan> The joys of Go, all the HTTP and platform-independent OS stuff is in the standard library :)
<stellar-slack> <graydon> well, HTTP "upload" isn't really enough to put resources on most places people want to put them
<stellar-slack> <graydon> if we had reasonable C++ client libs for things-that-speak-S3 it might be close enough. that and maybe sftp. but as it stands, well ...
<stellar-slack> <graydon> C++ doesn't get a lot of love in the web-services space
<stellar-slack> <donovan> Nope :)
<stellar-slack> <donovan> This did pop up recently though: https://aws.amazon.com/blogs/aws/introducing-the-aws-sdk-for-c/
<stellar-slack> <graydon> oh my goodness! I should have set a google alert on that term; I spent some time looking for something viable earlier in the year
<stellar-slack> <donovan> It is HUGE: https://github.com/awslabs/aws-sdk-cpp
<stellar-slack> <graydon> AWS and google both tend to auto-generate bindings. I would expect they spent 99% of their time writing a viable bindings-generator then hit "bind to everything"
<stellar-slack> <graydon> it looks pretty machine-written
<stellar-slack> <donovan> Yep, lots of XML… Basically a glorified wrapper around curl and WinHTTP.
<stellar-slack> <graydon> it wouldn't surprise me if it's subsettable
<stellar-slack> <graydon> like we might be able to pull out the s3 and http parts
<stellar-slack> <graydon> that's actually quite a .. change in the landscape, as far as considering dependencies (in my mind)
<stellar-slack> <graydon> a lot of things speak s3 protocol
<stellar-slack> <graydon> I wonder if this thing can speak to any non-aws s3 providers
<stellar-slack> <graydon> thanks for the link! I'll need to study some :)
<stellar-slack> <donovan> Well, it does tie stellar (mostly) to AWS… I guess Amazon need a China data centre!
<stellar-slack> <graydon> it's not just that. I would not be super comfortable tying us to amazon-the-company. s3 is a sufficiently standard way of doing blob storage, but I'd like at least a few of our history mirrors to be on non-amazon systems. gcs and azure are both solid. rackspace offers it. and softlayer/IBM. and HP. and centurylink. and verizon.
<stellar-slack> <scott> deploying horizon for testnet, there’ll be a hiccup in availability
<stellar-slack> <scott> deploying horizon for the live network
pixelbeat has quit [Ping timeout: 255 seconds]
<stellar-slack> <buhrmi> fund meeee GDVXG2FMFFSUMMMBIUEMWPZAIU2FNCH7QNGJMWRXRD6K5FZK5KJS4DDR
pixelbeat has joined #stellar-dev
pixelbeat has quit [Ping timeout: 240 seconds]
pixelbeat has joined #stellar-dev
loglaunch has quit [Ping timeout: 255 seconds]
loglaunch has joined #stellar-dev
pixelbeat has quit [Ping timeout: 264 seconds]
graydon has joined #stellar-dev