de_henne has quit [Remote host closed the connection]
de_henne has joined #stellar-dev
pixelbeat has joined #stellar-dev
stellar-slack has quit [Remote host closed the connection]
stellar-slack has joined #stellar-dev
<stellar-slack> <lab> i started a network with a 4-node quorum slice and 1 failure safety. nodes were started in order: node1(scp), node2(scp), node3(scp), node4
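As context for the numbers in this log: Byzantine-agreement protocols in the SCP family generally need n ≥ 3f + 1 nodes to tolerate f failures, so a 4-node slice tolerates exactly 1 failed node — losing node1 leaves no slack for any further hiccup. A quick sanity check (plain Python, illustrative only, not stellar-core code):

```python
# BFT-style failure tolerance: to survive f faulty nodes a system
# generally needs n >= 3f + 1 total nodes.
def max_failures(n: int) -> int:
    """Largest f such that n >= 3f + 1."""
    return (n - 1) // 3

assert max_failures(4) == 1   # the 4-node slice in this log tolerates 1 failure
assert max_failures(3) == 0   # 3 nodes tolerate none
```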
<stellar-slack> <lab> then i stopped node1.
<stellar-slack> <lab> node2 hit a network failure and lost track of consensus. then node3 lost it, and node4 too.
<stellar-slack> <lab> but the LCL of node2 is 144, and the LCL of node3/node4 is 145.
<stellar-slack> <lab> and i can't restart the network without a full reset.
<stellar-slack> <lab> i tried manualclose / replacing the db on node2, but it didn't work.
<stellar-slack> <lab> is it a fork? @jed @monsieurnicolas @matschaffer
<stellar-slack> <jed> what is the LCL of node 1?
<stellar-slack> <jed> That doesn't sound like a fork. It just sounds like you lost too many servers and SCP couldn't keep going
<stellar-slack> <jed> you can start the network from that state
<stellar-slack> <lab> lcl of node1 is actually 1. i restarted it from a new db and it hasn't synced yet.
<stellar-slack> <jed> oh I meant before it was restarted
<stellar-slack> <jed> so the way to get it back is to copy the DB and the bucket dir from the nodes on 145 to one of the other nodes
<stellar-slack> <jed> then you forcescp on the 3 with 145
<stellar-slack> <jed> this should start over from 145
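jed's recovery procedure, sketched as shell commands. This is a dry run that only prints each step; the paths, filenames, and node layout are hypothetical examples, not stellar-core defaults — adjust them to your deployment before running anything for real.

```shell
#!/bin/sh
# Sketch of the recovery steps above (copy DB + buckets from a node on
# LCL 145, then forcescp). Dry run: prints commands instead of executing.
SRC=/data/node3        # hypothetical path: a node whose LCL is 145
DST=/data/node1        # hypothetical path: the node being repaired
run() { echo "+ $*"; } # dry-run helper: print instead of execute

run stellar-core --conf "$SRC/stellar-core.cfg" --info      # confirm LCL on the source
run cp "$SRC/stellar.db" "$DST/stellar.db"                  # copy the database
run cp -r "$SRC/buckets" "$DST/buckets"                     # copy the bucket dir
run stellar-core --conf "$DST/stellar-core.cfg" --forcescp  # force SCP on next start
run stellar-core --conf "$DST/stellar-core.cfg"             # start; should resume from 145
```

Drop the `run` prefix (or replace it with direct invocation) once the paths are correct for your setup.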
<stellar-slack> <lab> node1 was restarted before lcl 100
<stellar-slack> <lab> i tried copying the db and bucket dir.
<stellar-slack> <jed> hmm and what happens when you start it after copying?
pixelbeat has quit [Ping timeout: 250 seconds]
pixelbeat has joined #stellar-dev
<stellar-slack> <lab> it just shows overlay info logs
<stellar-slack> <lab> i just tried copying db+buckets from node3 -> node1
<stellar-slack> <jed> I mean when you run stellar-core --info
<stellar-slack> <lab> then i started node1 with forcescp; node1, node3, node4 came to 148 before i manually stopped node1
<stellar-slack> <lab> 2015-09-30T00:21:44.046 a96c31 [] [default] INFO { "info" : { "build" : "klm-1.0", "ledger" : { "age" : 456, "closeTime" : 1443543248, "hash" : "a46967ac5e734f48adc6472d0cec9c81a457e570fa90320b0247c027fcd0d27c", "num" : 149 }, "network" : "KLM is a Kilo of xLM; Strllar is an awesome copycat Stellar.", "numPeers" : 2, "protocol_version" :
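On lab's earlier "is it a fork?" question: a fork would mean two nodes reporting different hashes for the same ledger number, which can be checked directly against `--info`-style output like the line above. An illustrative check (the `same_chain` helper is hypothetical; field names are taken from the log):

```python
import json

# Minimal fork check over stellar-core --info style JSON: two nodes
# have diverged only if they report different hashes for the SAME
# ledger number. Different heights alone prove nothing.
def same_chain(info_a: str, info_b: str) -> bool:
    a = json.loads(info_a)["info"]["ledger"]
    b = json.loads(info_b)["info"]["ledger"]
    if a["num"] != b["num"]:
        return True  # can't compare: one node is simply behind
    return a["hash"] == b["hash"]

node3 = '{"info": {"ledger": {"num": 145, "hash": "abc"}}}'
node4 = '{"info": {"ledger": {"num": 145, "hash": "abc"}}}'
assert same_chain(node3, node4)  # same num, same hash: no fork
```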
<stellar-slack> <jed> The connection to the others drops?
<stellar-slack> <lab> so the number of nodes in china should be less than the failure safety
<stellar-slack> <lab> the behavior of the GFW is not quite predictable. it's route dependent, time dependent...
pixelbeat has quit [Ping timeout: 240 seconds]
<stellar-slack> <buhrmi> the wall has to go. the wall has to go.
pixelbeat has joined #stellar-dev
nivah has joined #stellar-dev
pixelbeat has quit [Ping timeout: 260 seconds]
pixelbeat has joined #stellar-dev
<stellar-slack> <jed> restarting testnet shortly
<stellar-slack> <scott> the master branches of ruby/js/go base libraries are now updated to support the xdr as of stellar-core commit ad22bccafbbc14a358f05a989f7b95714dc9d4c6
<stellar-slack> <graydon> scott: updated scc locally, seems to mostly work!
<stellar-slack> <graydon> (except um, Failed to validate tx: 475a8fc56bee57f7a7144f506099e705dfb129f571266c4c74fdb7de069cc22e could not be found in txhistory table on process node0)
<stellar-slack> <graydon> scott: any chance I need to adjust something else scc-side?
<stellar-slack> <scott> graydon: Are you using ruby-stellar-base 0.6.0 or 0.6.1? 0.6.0 used fees that were too low
<stellar-slack> <scott> The latest scc master should be using 0.6.1
<stellar-slack> <graydon> 0.6.1
<stellar-slack> <graydon> I just updated
<stellar-slack> <graydon> I'll merge w/ master
<stellar-slack> <graydon> sec
<stellar-slack> <scott> I was able to run all the recipes from horizon without error, so I would expect your recipes to work.
<stellar-slack> <graydon> not sure. txs are getting in and ledgers are closing but the one it is looking to verify isn't showing up
<stellar-slack> <graydon> this is examples/simple_payment.rb
<stellar-slack> <scott> oh weird! I ran that fine locally at one point today, let me try again
<stellar-slack> <graydon> (under docker, with a couple fixes to the .env file generation I'm about to push)
<stellar-slack> <graydon> I'll try it locally
<stellar-slack> <graydon> same error locally
<stellar-slack> <scott> let me dig around the scc code for a bit to see if something pops out. Updating my docker images now to try that as well
<stellar-slack> <graydon> scott: maybe my image is behind. stick with what you're doing, it's probably on my end.
<stellar-slack> <graydon> was just wondering if you thought it should work / if it rang a bell
<stellar-slack> <graydon> "yes" and "no" is fine :)
<stellar-slack> <scott> kk! If you need any help, I’ll be around for the next hour or so
<stellar-slack> <graydon> ok
<stellar-slack> <scott> shutting down horizon to do some maintenance. Will start it back up after stellar-core is updated
<stellar-slack> <jed> testnet is down while we roll out the new version
Anduck has quit [Ping timeout: 246 seconds]
Anduck has joined #stellar-dev
<stellar-slack> <jed> ok testnet is back up
<stellar-slack> <scott> horizon-importer is back up and running. horizon is going to be down for about an hour while I fix up some last minute snafus