<stellar-slack> <sacarlson> cool, but with 20 you can't add any lines of credit; you need 30 I think
<stellar-slack> <sacarlson> 20 is the min, so you can't even send a native transaction
<stellar-slack> <buhrmi> it's 200,000,000 / Stellar::ONE = 20
<stellar-slack> <sacarlson> well yes, you need a min of 200,000,000 then
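For context, the arithmetic above is the minimum-balance rule the two are quoting: balances are held in stroops, `Stellar::ONE` (10^7) converts them to XLM, and each subentry (trustline, extra signer) raises the reserve by 10 XLM. A minimal sketch in TypeScript, with the constant names as assumptions rather than any SDK's API:

```
// Reserve math as quoted in the chat (2015-era values).
const STROOPS_PER_XLM = 10_000_000;       // Stellar::ONE in ruby-stellar-base
const BASE_RESERVE_STROOPS = 100_000_000; // 10 XLM per reserve unit

// An account reserves (2 + subentries) units, so a bare account needs
// 20 XLM and each trustline or added signer costs 10 XLM more.
function minBalanceXlm(subentries: number): number {
  return ((2 + subentries) * BASE_RESERVE_STROOPS) / STROOPS_PER_XLM;
}

console.log(minBalanceXlm(0)); // 20 -- can't yet hold a line of credit
console.log(minBalanceXlm(1)); // 30 -- one trustline
```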
<stellar-slack> <buhrmi> it's the root account lol
<stellar-slack> <sacarlson> oh, so the root account master is still open?
sacarlson has left #stellar-dev [#stellar-dev]
<stellar-slack> <sacarlson> I looked to see if I could bring up live-net with my stellar-core, but I've failed to find the #[HISTORY.h1] locations for live in any docs so far
<stellar-slack> <lab> i'm sorry, the root account is disabled by me. :)
<stellar-slack> <sacarlson> I hope you took some coins first
<stellar-slack> <lab> yes, 100 billion Lumens..
<stellar-slack> <sacarlson> good, send me 10k later then
<stellar-slack> <lab> if i had some, i would donate those lumens to SDF. :)
<stellar-slack> <lab> otherwise they won't continue developing stellar-core..
<stellar-slack> <buhrmi> so who controls the address with the billions of lumens? i can't do anything
<stellar-slack> <sacarlson> my guess is they are doing preliminary tests on it before they start distribution
<stellar-slack> <sacarlson> but it never hurts to ask, buhrmi
TheSeven has quit [Disconnected by services]
[7] has joined #stellar-dev
nivah has quit [Ping timeout: 246 seconds]
<stellar-slack> <sacarlson> I'm not sure this makes sense, but I would like some ideas on how to set up the infrastructure for a dynamic multisig account that allows automated joining and exiting from a joint account
<stellar-slack> <sacarlson> this is the framework I was thinking of so far, but I don't think it will work as stated
<stellar-slack> <sacarlson> this would allow my poker players to enter a game that is already in play, or exit a game that continues, with safe transition of entry and exit from a joint multi-signed account
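The ledger primitive such a scheme would build on is the `setOptions` operation: add the entering player's key as a signer on the joint account and re-tune the thresholds, and remove a leaving player with `weight: 0`. A rough sketch below, using API names from a later js-stellar-sdk release (the API of the time differed) and placeholder accounts; enough of the current signers still have to sign each membership change:

```
import { BASE_FEE, Networks, Operation, Server, TransactionBuilder } from "stellar-sdk";

const server = new Server("https://horizon-testnet.stellar.org");

// Build (not submit) a transaction that seats a new player on the joint
// account and requires a majority of the resulting n signers to act.
async function seatPlayer(jointAccountId: string, playerKey: string, n: number) {
  const joint = await server.loadAccount(jointAccountId);
  const majority = Math.floor(n / 2) + 1;
  return new TransactionBuilder(joint, { fee: BASE_FEE, networkPassphrase: Networks.TESTNET })
    .addOperation(Operation.setOptions({
      signer: { ed25519PublicKey: playerKey, weight: 1 }, // weight 0 removes a signer
    }))
    .addOperation(Operation.setOptions({
      lowThreshold: majority, medThreshold: majority, highThreshold: majority,
    }))
    .setTimeout(60)
    .build(); // collect signatures from existing signers, then submit
}
```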
<stellar-slack> <hsxuif> @jed: but access to that link requires admin privilege o:)
<stellar-slack> <hsxuif> seems so, not quite sure
<stellar-slack> <sacarlson> hsxuif: oh, you want to run stellar-core? I'm not totally sure what the diff between a node and a validator is, but to start you need to install stellar-core
<stellar-slack> <sacarlson> cool, thanks buhrmi, I'm now hooking in
<stellar-slack> <sacarlson> I'm using a vpn so I'll be looking like I'm in Singapore
<stellar-slack> <sacarlson> I got sync and show 8 peers; why don't I see the peer list on your server, buhrmi?
<stellar-slack> <buhrmi> cause i broke the peer list, let me redeploy
<stellar-slack> <sacarlson> ok
<stellar-slack> <sacarlson> I failed to enter a database, so it came up but with no data to look at. I didn't even know it would run without at least some database like sqlite
<stellar-slack> <buhrmi> peer list is back
<stellar-slack> <sacarlson> I show that there are 6 accounts
sacarlson has joined #stellar-dev
<stellar-slack> <lab> right
<stellar-slack> <lab> biggest is sdf, big but not biggest is helpsdf..
<stellar-slack> <lab> the others are mine...
<stellar-slack> <lab> :)
<stellar-slack> <lab> rocket is already refueled..
<stellar-slack> <buhrmi> can u fund me GDVXG2FMFFSUMMMBIUEMWPZAIU2FNCH7QNGJMWRXRD6K5FZK5KJS4DDR
<stellar-slack> <sacarlson> to prove you own it, change your home_domain to lab :grinning:
<stellar-slack> <lab> :)
<stellar-slack> <sacarlson> otherwise how will we know where to point the inflation money?
<stellar-slack> <lab> buhrmi: sdf's accounts are not set up well yet; after that they will fund us
<stellar-slack> <buhrmi> smooth launch so far
<stellar-slack> <eno> my validator runs well
<stellar-slack> <sacarlson> my validator will crash within 24 hours when my ip address is changed by my ISP
<stellar-slack> <sacarlson> shouldn't say crash; it just locks up in a non-synced state
de_henne has quit [Remote host closed the connection]
<stellar-slack> <donovan> Looks like I need 11 XLM :)
<stellar-slack> <donovan> Oops, 21 XLM!
<stellar-slack> <sacarlson> and my hopeful stellar address if anyone needs to know: GDW3CNKSP5AOTDQ2YCKNGC6L65CE4JDX3JS5BV427OB54HCF2J4PUEVG
<stellar-slack> <lab> tip me: GAS2FDJIROHCJDM43TKDOPDSCYMVPGMULGF42QR65FINKVXHNDJTJC6E
<stellar-slack> <sacarlson> donovan: and more if you hope to have any added signers on your account; they cost an additional 10 XLM each as far as I know
<stellar-slack> <donovan> Next stop, vanity account generator :)
<stellar-slack> <lab> 21 is not enough, the fee is expensive.
<stellar-slack> <donovan> Who’s the lucky `GAENIE5LBJIXLMJIAJ7225IUPA6CX7EGHUXRX5FLCZFFAQSG2ZUYSWFK`?
<stellar-slack> <lab> fyi: Keypair.random in js-stellar-base runs at ~200/s
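That rate puts donovan's vanity generator within reach: just loop `Keypair.random()` until the address matches. A toy sketch below; `publicKey()` is the accessor name in later js-stellar-base releases (the release current at the time used different names), so treat the API as an assumption:

```
import { Keypair } from "stellar-base";

// Brute-force a vanity address. Matching starts at the 3rd character:
// the 1st is always 'G' and the 2nd is constrained by the version byte.
// Each extra base32 character multiplies the expected work by 32, so at
// the ~200 keypairs/s quoted above a 3-char pattern takes on average
// 32^3 / 200 ≈ 164 s, roughly 3 minutes.
function findVanity(pattern: string): Keypair {
  for (;;) {
    const kp = Keypair.random();
    if (kp.publicKey().slice(2).startsWith(pattern)) return kp;
  }
}
```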
<stellar-slack> <sacarlson> lab: the fee is only expensive for people using the js sdk, which charges 10X more than ruby does
<stellar-slack> <lab> helpsdf, donovan
<stellar-slack> <donovan> @lab, not sure I’ll be using js :)
<stellar-slack> <lab> ruby code is unchanged.
<stellar-slack> <lab> i was told it's about 2k/s in go
<stellar-slack> <sacarlson> there is a bug in the js sdk that charges 1000 stroops instead of 100 stroops
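Until that's fixed, one hedged workaround is to state the fee explicitly rather than rely on the SDK default. A sketch using option names from a later js-stellar-base release (assumptions here; the builder of the time took different arguments), with the addresses above reused as placeholders:

```
import { Account, Asset, Networks, Operation, TransactionBuilder } from "stellar-base";

// Placeholder source account with a placeholder sequence number.
const source = new Account("GDW3CNKSP5AOTDQ2YCKNGC6L65CE4JDX3JS5BV427OB54HCF2J4PUEVG", "0");

const tx = new TransactionBuilder(source, {
  fee: "100", // 100 stroops per operation, stated explicitly
  networkPassphrase: Networks.PUBLIC,
})
  .addOperation(Operation.payment({
    destination: "GAS2FDJIROHCJDM43TKDOPDSCYMVPGMULGF42QR65FINKVXHNDJTJC6E",
    asset: Asset.native(),
    amount: "10", // amount in XLM
  }))
  .setTimeout(60)
  .build();
```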
<stellar-slack> <donovan> I’m sure stellar doesn’t have tx ordering issues though :)
<stellar-slack> <lab> sacarlson, i see. i didn't use the js sdk, but js base
<stellar-slack> <sacarlson> yes, I should assume the bug is in js-base then
<stellar-slack> <sacarlson> but the js sdk uses js-base so ...
<stellar-slack> <lab> tx ordering... did anyone really make a profit from front running?
<stellar-slack> <donovan> Well, I made plenty of profit from being the first arbitrageur in a lot of ledgers. There was one account that looked like it might be front-running...
<stellar-slack> <lab> i wanted to make an exploit months ago, but i thought there might be many competitors and abandoned it
<stellar-slack> <donovan> To be honest, spinning the tx hashes was the easy bit. Correctly calculating the offer funding and checking all the account flags was the hard bit :)
<stellar-slack> <donovan> Also, working out best paths from all possible paths in a small amount of time.
<stellar-slack> <lab> there were many deep dumps/pumps in ripple
<stellar-slack> <donovan> Yup.
<stellar-slack> <lab> front running should be profitable.
<stellar-slack> <donovan> Well, they’ve finally pushed a fix.
<stellar-slack> <sacarlson> hsxuif: yes, your commit looks good; it's only got one small change in a doc file over the main release
<stellar-slack> <donovan> ```NETWORK_PASSPHRASE="Public Global Stellar Network ; September 2015"``` Looks different
<stellar-slack> <dzham> horizon doesn’t generate paths yet, no? so you’d still have to generate them manually anyway
<stellar-slack> <donovan> @dzham: Just want to look at an example path.
<stellar-slack> <dzham> gotcha
<stellar-slack> <sacarlson> that's the live network passphrase
<stellar-slack> <donovan> @sacarlson: That’s what I’m running on :)
<stellar-slack> <dzham> js-stellar-sdk still doesn’t work for me
<stellar-slack> <sacarlson> but there's nothing to play with on live yet unless you have some funds
<stellar-slack> <donovan> I just thought that might be a problem with @hsxuif’s config… Who knows, I’m just a newbie :)
<stellar-slack> <sacarlson> I gave hsxuif a testnet config, assuming that's what he wanted
<stellar-slack> <hsxuif> @donovan: also a newbie here, but thanks very much
<stellar-slack> <hsxuif> @sacarlson: guess that i can use the testnet cfg for now
<stellar-slack> <hsxuif> thank u both
<stellar-slack> <sacarlson> ya, you can do more with it at this time anyway
<stellar-slack> <hsxuif> just haven't got many ideas how to configure that, :S
stellar-slack has quit [Remote host closed the connection]
stellar-slack has joined #stellar-dev
<stellar-slack> <sacarlson> hsxuif: I sent a copy of my config; didn't you get it?
<stellar-slack> <donovan> I think there are too many things to configure. Sensible defaults, with only `QUORUM_SET`, `NODE_SEED` and `NETWORK_PASSPHRASE` needing to be configured, would be ideal. It would be good if there was some kind of DNS bootstrapping for `KNOWN_PEERS` and all the history stuff. If all the files lived under a default directory as well, that would save some pain.
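Something like the file below is what that boils down to; the option names are the ones mentioned in this conversation, while the seed, peers, and validator keys are placeholders, and the exact quorum syntax of that stellar-core release may differ:

```
NETWORK_PASSPHRASE="Public Global Stellar Network ; September 2015"
NODE_SEED="S...placeholder-node-seed..."
KNOWN_PEERS=["peer-one.example.org","peer-two.example.org"]

[QUORUM_SET]
THRESHOLD=2
VALIDATORS=["G...validator-one...","G...validator-two...","G...validator-three..."]
```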
<stellar-slack> <donovan> stellar-core should validate the config fully at startup as well, rather than when catchup starts.
<stellar-slack> <donovan> Is mine. Some of the history curl commands didn’t work, so I had to comment them out.
<stellar-slack> <hsxuif> @donovan: yeah, the validation should be much earlier
<stellar-slack> <hsxuif> thanks for sharing
<stellar-slack> <sacarlson> just plug them all in and try them all
<stellar-slack> <sacarlson> the last 2 look like they should work, but donovan's is for live net, so you can't play with any transactions with it
<stellar-slack> <sacarlson> but you can view the database to see what activity there is; that can also be seen here: http://stellar-core.org/
<stellar-slack> <sacarlson> almost beer drinking hour, so last chance to ask me any questions. the main shift will be here soon with better answers than mine
<stellar-slack> <sacarlson> so what, you had to hit ctrl-c 2 times?
<stellar-slack> <hsxuif> no, only `kill -9` solves it
<stellar-slack> <dzham> can anyone submit transactions to horizon? what format are you POSTing in, in that case?
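For reference, Horizon's submission endpoint takes the signed transaction as a base64-encoded `TransactionEnvelope` XDR in a `tx` form field; a sketch, with the host and envelope as placeholders:

```
curl -X POST "https://horizon.example.org/transactions" \
     --data-urlencode "tx=AAAA...base64-encoded-envelope..."
```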
<stellar-slack> <sacarlson> I'm not sure you guys have the needed scripts to fully reset and normally start stellar-core; I guess I should share them. oh, I don't recall having to do that before
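The sequence such scripts wrap is roughly the following, using the flags that come up later in this conversation; `--newdb` wipes local state, so run it only when you mean to:

```
stellar-core --newdb     # re-initialize the local database from scratch
stellar-core --forcescp  # flag the node to resume SCP from its last closed ledger
stellar-core             # then run the node normally
```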
<stellar-slack> <hsxuif> @donovan: tried your conf, and it seems to be working
<stellar-slack> <bartek> @donovan: you can set `CATCHUP_COMPLETE=true` in your config and your node will perform a complete catchup
<stellar-slack> <donovan> @bartek: I have that set, but it doesn’t start.
<stellar-slack> <bartek> there is no info about a catchup in the stellar-core logs?
<stellar-slack> <donovan> No, the usual stuff has stopped appearing.
<stellar-slack> <eno> my validator lost its connection. what i can see is dropping, handshake, connect, read timeout, even after a reboot.
<stellar-slack> <scott> @donovan: does the last line of output appear to be truncated? I’m having an issue where it appears that one of the forked processes used in acquiring history is mucking with stdout.
<stellar-slack> <donovan> @scott: I get the feeling that using system utilities like curl, aws, osscmd, etc. is a bit fragile with regards to getting history. How are the timeouts being handled without using a native C++ client?
<stellar-slack> <scott> that’d be something to ask @monsieurnicolas or @graydon. I don’t have the experience with that code base to properly answer
<stellar-slack> <eno> so many versions in the peers list: v0.1 "v0.1-1-g51e297a" "59a2dea" "d3adf91"
<stellar-slack> <donovan> @nelisky: Have you added your own patches to the tagged release?
<stellar-slack> <lab> donovan, did you get ledger seq=14605?
<stellar-slack> <eno> stuck at 14606
<stellar-slack> <donovan> @lab, no idea :) I’ve already done a newdb :)
<stellar-slack> <lab> anyone else get this in the log? Got consensus: [seq=14605, prev=61ba9b, tx_count=0, sv: [ txH: 470fb4, ct: 1443706755, upgrades: [ ] ]]
<stellar-slack> <lab> i will add your node_id to my quorum slice
<stellar-slack> <lab> please check
<stellar-slack> <lab> donovan, it's a pity
<stellar-slack> <lab> we need the same LCL to forcescp a restart
<stellar-slack> <donovan> I think it’s much better to avoid the “missing ledgers fiasco” of another network and only start the live network when these types of bugs have been ironed out :)
<stellar-slack> <lab> thanks to scp, the ledgers are not missing :)
<stellar-slack> <lab> it's also a chance to upgrade my quorum set to 7 :)
<stellar-slack> <hsxuif> @lab: can u share your validators?
<stellar-slack> <lab> glad to have one more failure safety
<stellar-slack> <nelisky> @lab update and restart on our side?
<stellar-slack> <lab> just ctrl-x to stellar-core and
<stellar-slack> <lab> run "stellar-core --forcescp && stellar-core"
<stellar-slack> <lab> @nelisky: @eno: @buhrmi, here we go
<stellar-slack> <lab> need 5 forced to start
<stellar-slack> <lab> ctrl-c
<stellar-slack> <nelisky> right...
<stellar-slack> <eno> 2015-10-01T11:03:38.100 1cedae [] [Herder] INFO Lost track of consensus
<stellar-slack> <lab> it's ok, just wait
<stellar-slack> <nelisky> not much has changed from my point of view :)
<stellar-slack> <eno> same
<stellar-slack> <lab> wait for me
<stellar-slack> <lab> my nodes restarted with forcescp, did you? @nelisky @eno @buhrmi
<stellar-slack> <nelisky> @lab, yes, as instructed
<stellar-slack> <graydon> donovan: the downloading thing retries on failure. it does not time out a given download attempt, though I suppose I could add timeouts to the process-runner. I believe TCP connection timeouts will occur on a connect failure; I'm not sure if there's a corner case in there where a very slow HTTP transaction could suspend it indefinitely.
<stellar-slack> <graydon> donovan: however, the subprocess runner also runs several processes in parallel (configurable, default is 32), so if 1 or 2 hang for quite a while, the other 30-or-so will carry on chewing through downloads
<stellar-slack> <graydon> donovan: in practice we haven't seen problems around timeouts, but I'm happy to tweak this / try to improve it
<stellar-slack> <donovan> @graydon: I manually entered some of the curl commands for the CDNs based in China and they did block for quite some time.
<stellar-slack> <graydon> interesting. so... as a short-term workaround, curl itself has a command-line timeout you can set
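Concretely, those are standard curl flags and can go straight into a history `get` command (values illustrative):

```
curl -sf --connect-timeout 10 --max-time 120 "$URL" -o "$OUT"
```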
<stellar-slack> <donovan> I guess the point is that it’s quite hard to get the cause of failure when using subprocesses, without parsing text. An HTTP client would be a bit more informative for marking bad URLs.
<stellar-slack> <graydon> I suspect if we did this at the process-runner level, we'd need a timeout that increased with retries, in case it killed an in-process-but-kinda-slow download. it's hard to differentiate those.
<stellar-slack> <graydon> that's true. I guess it would be possible to build in an HTTP client as well, I'm just hesitant about the expansion of surface area (code-wise, testing, attack surface, interaction with our event loop, etc.)
<stellar-slack> <donovan> Another option is to just deliver history through an XDR message...
<stellar-slack> <graydon> (I mean technically we _have_ an HTTP client built in already, it's just far too minimal -- and synchronous -- to use at any scale. it's only used for injecting commands into running stellar-core processes' HTTP interfaces)
<stellar-slack> <graydon> near the beginning of the rewrite we wrestled with that a lot (whether to send history over the p2p layer)
<stellar-slack> <graydon> I argued a lot then, and feel pretty similarly now, that it's better to go _via_ a history archive so that we ensure the archives always exist
<stellar-slack> <graydon> keeping in mind that history isn't kept anywhere on the node. if the archives fail, history's gone.
<stellar-slack> <donovan> It’s great that history is available over HTTP for analysis and processing, I just get a bit nervous about all the configuration to make it work. I made lots of mistakes getting my config file correct.
<stellar-slack> <donovan> Maybe that says more about me :)
<stellar-slack> <graydon> history has much higher long-term availability requirements, and much lower I/O requirements, than the live node data. so I think it makes sense to keep it off-node. I hear what you're saying about config files. not sure what to do about it, is all.
<stellar-slack> <graydon> HTTP built in would help for the get-side
<stellar-slack> <graydon> for the put-side, people have a lot of ways they're going to want to publish data
<stellar-slack> <graydon> if it were easier to diagnose failures in it, would that help?
<stellar-slack> <graydon> like --newhist is sufficient for debugging, but that's not the most obvious
<stellar-slack> <scott> Seems like we should publish a config file with a history config pointing to SDF’s archive
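That would be a read-only `[HISTORY]` entry along these lines, where `{0}` and `{1}` are stellar-core's substitution tokens for the remote path and the local file, and the archive URL is a placeholder:

```
[HISTORY.sdf]
get="curl -sf http://history.example.org/{0} -o {1}"
```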
<stellar-slack> <donovan> RE: config, it would be good to just set a BASE_DIR and have all directories default to subdirectories of that directory. If someone wants to override it they could, but sensible defaults are always a winner.
<stellar-slack> <graydon> maybe something like --test-history
<stellar-slack> <donovan> OPTIONS is a good way of testing HTTP endpoints
<stellar-slack> <graydon> donovan: you mean rather than TMP and BUCKETS? there's a bug open on that, yeah. I mean to get around to it. hasn't been high enough priority yet, lessee.. https://github.com/stellar/stellar-core/issues/367
<stellar-slack> <graydon> donovan: well, we only have the two commands in config, get and put. but we could test get commands by getting the .well-known/stellar-history.json file from an archive.
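i.e. a health check no fancier than this, with the archive root as a placeholder:

```
curl -sf "$ARCHIVE_ROOT/.well-known/stellar-history.json"
```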
<stellar-slack> <graydon> testing put commands is trickier since we have no delete command. no way to put a random temp file then delete it
<stellar-slack> <donovan> I guess the config validation could just be checking that all named directories are writable (with a quickly deleted testfile) at startup, rather than when the catchup process tries to put the file in the directory.
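In shell terms that startup probe is just the following, with the directory variables as stand-ins for whatever the config names (TMP and BUCKETS come up above):

```
for d in "$TMP_DIR" "$BUCKET_DIR"; do
  touch "$d/.probe" && rm "$d/.probe" || echo "not writable: $d" >&2
done
```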
<stellar-slack> <donovan> But yeah, for HTTP upload, it’s all a bit more complicated...
<stellar-slack> <donovan> The joys of Go, all the HTTP and platform-independent OS stuff is in the standard library :)
<stellar-slack> <graydon> well, HTTP "upload" isn't really enough to put resources on most places people want to put them
<stellar-slack> <graydon> if we had reasonable C++ client libs for things-that-speak-S3 it might be close enough. that and maybe sftp. but as it stands, well ...
<stellar-slack> <graydon> C++ doesn't get a lot of love in the web-services space
<stellar-slack> <graydon> oh my goodness! I should have set a google alert on that term; I spent some time looking for something viable earlier in the year
<stellar-slack> <graydon> AWS and google both tend to auto-generate bindings. I would expect they spent 99% of their time writing a viable bindings-generator then hit "bind to everything"
<stellar-slack> <graydon> it looks pretty machine-written
<stellar-slack> <donovan> Yep, lots of XML… Basically a glorified wrapper around curl and WinHTTP:
<stellar-slack> <graydon> it wouldn't surprise me if it's subsettable
<stellar-slack> <graydon> like we might be able to pull out the s3 and http parts
<stellar-slack> <graydon> that's actually quite a .. change in the landscape, as far as considering dependencies (in my mind)
<stellar-slack> <graydon> a lot of things speak s3 protocol
<stellar-slack> <graydon> I wonder if this thing can speak to any non-aws s3 providers
<stellar-slack> <graydon> thanks for the link! I'll need to study some :)
<stellar-slack> <donovan> Well, it does tie stellar (mostly) to AWS… I guess Amazon need a China data centre!
<stellar-slack> <graydon> it's not just that. I would not be super comfortable tying us to amazon-the-company. s3 is a sufficiently standard way of doing blob storage, but I'd like at least a few of our history mirrors to be on non-amazon systems. gcs and azure are both solid. rackspace offers it. and softlayer/IBM. and HP. and centurylink. and verizon.
<stellar-slack> <scott> deploying horizon for testnet, there’ll be a hiccup in availability
<stellar-slack> <scott> deploying horizon for the live network
pixelbeat has quit [Ping timeout: 255 seconds]
<stellar-slack> <buhrmi> fund meeee GDVXG2FMFFSUMMMBIUEMWPZAIU2FNCH7QNGJMWRXRD6K5FZK5KJS4DDR