pixelbeat_ has joined #stellar-dev
irisli has quit [Quit: Connection closed for inactivity]
de_henne_ has joined #stellar-dev
de_henne has quit [Ping timeout: 264 seconds]
sv-logger has quit [Remote host closed the connection]
sv-logger has joined #stellar-dev
Anduck has quit [Ping timeout: 246 seconds]
Anduck has joined #stellar-dev
TheSeven has quit [Ping timeout: 246 seconds]
TheSeven has joined #stellar-dev
pixelbeat_ has quit [Ping timeout: 248 seconds]
mafs has quit [Ping timeout: 248 seconds]
mafs has joined #stellar-dev
ggherdov` has quit [Ping timeout: 264 seconds]
edubai_ has quit [Ping timeout: 240 seconds]
termos has quit [Ping timeout: 256 seconds]
jbenet has quit [Ping timeout: 248 seconds]
proppy has quit [Read error: Connection reset by peer]
ggherdov` has joined #stellar-dev
jbenet has joined #stellar-dev
<sacarlson> after my nap I took a look at js-stellar-lib to see how to obtain sequence numbers, and found at least one example in javascript with server.loadAccount("gspbx....
termos has joined #stellar-dev
<sacarlson> but my search in ruby-stellar-base fails to find anything equivalent for obtaining the needed next sequence number in any of the code or examples. Any clue, @scott or others?
de_henne_ has quit [Remote host closed the connection]
de_henne has joined #stellar-dev
<stellar-slack> <monsieurnicolas> base is too low level: you need to call into Horizon via ruby-stellar-lib; see examples https://github.com/stellar/ruby-stellar-lib/tree/master/examples ; the sequence number should automatically be populated by the library (as it's getting it from the "accounts" endpoint). I don't know what the state of this library is, though (I don't even know how to read .rb :) ).
edubai_ has joined #stellar-dev
<sacarlson> I originally tried ruby-stellar-lib but failed; I've made more progress with working functionality in ruby at this lower level
<sacarlson> but I can take a look at that level to see if they have made any changes
<sacarlson> whatever they don't already have working in ruby I will add myself
proppy has joined #stellar-dev
<sacarlson> it looks like it still hasn't been changed in over 30 days https://github.com/stellar/ruby-stellar-lib
<sacarlson> with this at the top: STATUS: this library is very early and incomplete. The examples provided do not work, yet
<sacarlson> I tried it without first reading that, though, and found out for myself that it didn't work ha ha
koolhead17 has joined #stellar-dev
koolhead17 has quit []
<sacarlson> just to be sure I ran one example again:
<sacarlson> 01_get_funded.rb:4:in `<main>': uninitialized constant Stellar::Account (NameError)
<stellar-slack> <meetreks> sacarlson
<stellar-slack> <meetreks> is this for the new core
<stellar-slack> <meetreks> which is replacing stellard?
<sacarlson> yes, I'm preparing to upgrade my software to the new stellar-core, which will provide me with added escrow features
<sacarlson> if all you need is currency exchange and sending money, the old system works fine
<sacarlson> does that answer your question @meetreks ?
sacarlson1 has joined #stellar-dev
sacarlson has quit [Ping timeout: 246 seconds]
pixelbeat_ has joined #stellar-dev
sacarlson1 has quit [Quit: Leaving.]
sacarlson has joined #stellar-dev
pixelbeat_ has quit [Ping timeout: 244 seconds]
snufdehond has joined #stellar-dev
snufdehond has left #stellar-dev [#stellar-dev]
nelisky has joined #stellar-dev
pixelbeat_ has joined #stellar-dev
<stellar-slack> <fredolafritte> In the package.json of interstellar-network-widgets, the git repo points to interstellar-network; it seems to be a cut-and-paste error, doesn't it?
pixelbeat_ has quit [Ping timeout: 255 seconds]
<stellar-slack> <fredolafritte> also, interstellar-ui-messages is not yet added to github
<stellar-slack> <naumenkogs> I'm still trying to run an isolated network of 2 PCs via stellar-core, and now I'm getting txBAD_SEQ or "account not exists" errors while running load tests (localhost:port/generateload)
<stellar-slack> <naumenkogs> Do you have any idea why I'm getting such errors? Btw some transactions are fine.
<stellar-slack> <naumenkogs> And if I manually `select * from accounts` after some seconds of load testing I get 260 accounts
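(For reference: the load generator is driven through stellar-core's HTTP command interface, as in this minimal Ruby sketch. The command port 11626 is an assumed default, and any query parameters accepted by /generateload vary by version, so they are omitted here.)
```ruby
require "net/http"

# Kick off load generation on a local stellar-core node.
# Assumes the default HTTP command port 11626; adjust to your config.
uri = URI("http://localhost:11626/generateload")
puts Net::HTTP.get(uri)  # the node replies with a short status message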
pixelbeat_ has joined #stellar-dev
<stellar-slack> <bartek> @fredolafritte fixed. thanks! when it comes to `interstellar-ui-messages`, thanks for pointing it out. I finished the first version of this module this week. I'll ask someone to open this repo when SF wakes up (I don’t have permission, unfortunately)
<stellar-slack> <bartek> @sacarlson if you are making requests to horizon without using ruby-stellar-lib, just GET the account, like this: https://horizon-testnet.stellar.org/accounts/gWRYUerEKuz53tstxEuR3NCkiQDcV4wzFHmvLnZmj7PUqxW2wn (my account) and the sequence number is in the `sequence` field.
<stellar-slack> <bartek> basically, the main (and only?) reason sequence numbers are required is to prevent replay attacks: https://en.wikipedia.org/wiki/Replay_attack
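(Putting bartek's suggestion into code: a minimal Ruby sketch, independent of ruby-stellar-lib, that GETs the account from Horizon and reads the `sequence` field. Error handling is omitted.)
```ruby
require "net/http"
require "json"

# Fetch an account record from Horizon and extract its sequence number.
address = "gWRYUerEKuz53tstxEuR3NCkiQDcV4wzFHmvLnZmj7PUqxW2wn"
uri = URI("https://horizon-testnet.stellar.org/accounts/#{address}")
account = JSON.parse(Net::HTTP.get(uri))
current = account["sequence"].to_i
puts current + 1  # the next transaction typically uses sequence + 1
```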
<sacarlson> @bartek ya that's perfect, just what I needed. I can write a ruby program that pulls the sequence out of that and complete the transactions with what I have now
<sacarlson> so for now I'm going to skip ruby-stellar-lib, which isn't working yet, and just use ruby-stellar-base, which seems to work so far; for example https://github.com/stellar/ruby-stellar-base/blob/master/examples/mid_level_transaction_post.rb#L23-L28
nelisky has quit [Ping timeout: 265 seconds]
<sacarlson> I should have my own working plug-and-play replacement ruby lib, letting me move from my present lib (which supports live.stellar.org and testnet.stellar.org) to your new horizon-testnet.stellar.org, in about 2 days or less if I'm lucky
Kwelstr has quit [Ping timeout: 248 seconds]
Kwelstr has joined #stellar-dev
<sacarlson> I was able to create the ruby program to resolve a sequence number from an account number with no problem
<sacarlson> but I failed to find a way to extract the account number, in the format needed, from the keypair structure created by friendbot = Stellar::KeyPair.from_seed("sfyjodTxbwLtRToZvi6yQ1KnpZriwTJ7n6nrASFR6goRviCU3Ff")
<stellar-slack> <bartek> you want to create KeyPair object using the account address?
<sacarlson> no I want to extract the account from the keypair; I thought friendbot.public_key.value would do it
<sacarlson> but that returns some strange binary or ???
<sacarlson> is this a scott thing?
<sacarlson> maybe I need the value converted to base58 or whatever it is
<stellar-slack> <bartek> scott created ruby-stellar-base but I think I can help too. but I don’t understand what you want to accomplish. what do you mean by extracting account from keypair?
<stellar-slack> <bartek> oh, maybe you simply want to get account address from keypair? does this help: `key_pair.address`?
<sacarlson> I want a number in this format, gWRYUerEKuz53tstxEuR3NCkiQDcV4wzFHmvLnZmj7PUqxW2wn, from the seed sfyjodTxbwLtRToZvi6yQ1KnpZriwTJ7n6nrASFR6goRviCU3Ff; that should return the account address, I guess
<sacarlson> I'll try it
<sacarlson> bartek ya that looks to be working, so I'm done
<sacarlson> ahead of schedule by 2 days
<stellar-slack> <bartek> congrats ;)
<sacarlson> time for some beer.... thanks bartek
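(The piece that unblocked this exchange, as a minimal ruby-stellar-base sketch; it assumes the gem is required as "stellar-base" and uses only the calls mentioned above.)
```ruby
require "stellar-base"

# Derive the account address (the "g..." form) from a secret seed.
friendbot = Stellar::KeyPair.from_seed("sfyjodTxbwLtRToZvi6yQ1KnpZriwTJ7n6nrASFR6goRviCU3Ff")
puts friendbot.address  # => account address in the "gWRYU..." style
```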
<stellar-slack> <bartek> @fredolafritte: https://github.com/stellar/interstellar-ui-messages has been published. happy testing!
<stellar-slack> <fredolafritte> thanks for being so responsive
<stellar-slack> <scott> @naumenkogs: the load testing infrastructure, to my understanding (@graydon can correct me, I bet), does not try particularly hard to produce 100% successful transactions. It’s simply causing a stellar-core to do a bunch of work, putting strain on the various subsystems to see if things break. You would usually run a test alongside the load generation whose success or failure is not dependent upon the load transactions themselves
<stellar-slack> <naumenkogs> Can you help me a bit more with /generateload? I have 2 connected nodes in my isolated stellar-core network, and when I run load tests the second node cannot obtain accounts, but it is still validating ledgers.
<stellar-slack> <naumenkogs> By the way, sometimes I see a problem connected with saving history. Something like 2015-07-16T18:47:26.694 7eb7f3 [140224629217152] [History] WARN failed to get .well-known/stellar-history.json from history archive 'vs'
<stellar-slack> <naumenkogs> Could it be connected with the load-test problems?
<stellar-slack> <naumenkogs> I dunno if @scott or @graydon can help me with that
<stellar-slack> <scott> yeah, I can’t help much with that… it would need to be one of the core devs. I have very little knowledge of the load generation system
<stellar-slack> <graydon> hey sorry for being late, reading
<stellar-slack> <graydon> naumenkogs: can you describe "second node cannot obtain accounts" more precisely?
<stellar-slack> <graydon> the load test should not prevent normal function, though it may be overloading the network you're using
<stellar-slack> <graydon> the warning concerning inability to fetch stellar-history.json suggests an error in your configuration of history archives
<stellar-slack> <graydon> have you configured a history archive? can you perhaps show me the stellar-core.cfg file you're using?
<stellar-slack> <naumenkogs> @graydon: in the bottom part of the config where the history is located I haven't actually changed anything. I use an example for that part.
<stellar-slack> <naumenkogs> I even left all of the history blocks commented out except the first
<stellar-slack> <naumenkogs> And for the first question: do you mean I cannot use load tests for a network of 2 PCs?
<stellar-slack> <graydon> ok, so that example file uses a local history archive consisting of just copying files to and from the archive. if you want two machines to be able to publish and catch up with one another, they need some shared history archive to write to and read from
<stellar-slack> <graydon> you can do a load test of 2 machines but they need to be in consensus for the load on one to affect the other
<stellar-slack> <naumenkogs> For sure, one machine starts /generateload. But the second one always says there are no accounts, for example, if I check it with a simple `select * from accounts;`
<stellar-slack> <naumenkogs> And can you give me a link to read about shared history?
<stellar-slack> <graydon> https://github.com/stellar/stellar-core/blob/master/src/history/readme.md is probably the best we have at the moment
<stellar-slack> <graydon> I know we need to write better docs about the overall network architecture. it's on my radar for "soon" but we're also trying to get things running in production..
<stellar-slack> <graydon> can I see the config file you're running with? it's unclear to me whether your two nodes are operating in consensus or independently.
<stellar-slack> <graydon> if they're independent, there will be no flow of transactions between the one doing the load test and the other
<stellar-slack> <graydon> if they're in consensus, the load test should affect both nodes
pixelbeat_ has quit [Ping timeout: 250 seconds]
<stellar-slack> <naumenkogs> I only have the config of one node right now, so it is easier to describe in words.
<stellar-slack> <naumenkogs> For example: I have one listener node and one validator node. Together they form a network with the same sequence of ledgers; I see "ledger closed" with the same hash on both monitors
<stellar-slack> <naumenkogs> But if I do `select * from accounts` on the listener node I get only the root account, while on the validator node I get 1000 records, for example
<stellar-slack> <naumenkogs> Maybe the problem is in history, but I'm not sure.
<stellar-slack> <graydon> what mode is the listener in? synced, catching up?
<stellar-slack> <graydon> (if you query it with /info )
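(Checking that would look something like the Ruby sketch below; the default command port 11626 is an assumption, and the exact JSON layout of /info may differ between versions.)
```ruby
require "net/http"
require "json"

# Ask a node for its state: synced, catching up, etc.
info = JSON.parse(Net::HTTP.get(URI("http://localhost:11626/info")))
puts info["info"]["state"]  # e.g. "Synced!" or a catching-up state
```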
<stellar-slack> <naumenkogs> I cannot check it right now. I must say they were started at the same time. I saw messages like "catching up", but in most cases there are no such messages
<stellar-slack> <graydon> hm, ok. it is definitely possible to run a multi-node load test -- it's part of our acceptance testing framework to do so, which I'm currently working on -- but this is where I'd start: check to see if the nodes are in consensus and if the listener is in sync.
<stellar-slack> <graydon> or rather, I guess there's no "consensus" in your network since you're having the listener blindly follow the leader
<stellar-slack> <graydon> in which case, yeah, probably it's just failing to catch up in the first place, and buffering ledger closes while trying to catch up
<stellar-slack> <graydon> catching up requires access to a shared history archive
<stellar-slack> <naumenkogs> I tried running THRESHOLD=2 with 2 validators, so I must say they can be in consensus, because they were closing ledgers
<stellar-slack> <graydon> ok. so with 2 validators and THRESHOLD=2 a load test on one should definitely cause transactions on the other
<stellar-slack> <graydon> since they will not close any ledgers at all unless in lock-step
<stellar-slack> <naumenkogs> So you are sure that the problem is in shared history things?
<stellar-slack> <graydon> I am not sure. it would help if I could see the configs you're using and the logs they generate.
<stellar-slack> <graydon> otherwise I'm just operating from descriptions, and those descriptions are changing. you've described two networks now, one with a single validator and single follower, and one with two validators.
<stellar-slack> <naumenkogs> But in both cases both nodes must know the accounts in the network
<stellar-slack> <graydon> I don't want to be critical, just understand it's hard for me to diagnose without details
<stellar-slack> <graydon> the accounts in the network are generated during load testing
<stellar-slack> <graydon> with a single validator and single follower, the validator can rush ahead and create a lot of accounts while the follower fails to catch up
<stellar-slack> <graydon> in that case the validator will see many accounts and the follower will see none
<stellar-slack> <graydon> with a pair of validators and THRESHOLD=2, the validators should not proceed to close a single ledger until they have consensus on the transactions to include, so they should have the same set of accounts at any given time (+/- momentary desynchronization of their sql COMMIT statements)
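(As a rough illustration of the two-validator setup being discussed, a stellar-core.cfg fragment; the syntax is modeled on the 2015-era example config and the validator keys are placeholders, so treat this as a sketch rather than a verified configuration.)
```
[QUORUM_SET]
# both validators must agree before either closes a ledger
THRESHOLD=2
VALIDATORS=["<validator-1-public-key>","<validator-2-public-key>"]
```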
<stellar-slack> <naumenkogs> I don't want to be annoying, and thank you for the response. The problem is it's 21:29 here, so I don't have access to the second node
<stellar-slack> <naumenkogs> Can you say a bit more about history? Isn't the [HISTORY.vs] block enough to save it?
<stellar-slack> <graydon> sure. we can return to this later, I understand if you're unable to get logs presently. I just ask you to be equally understanding that it limits my ability to give accurate / necessarily-correct advice :)
<stellar-slack> <graydon> history archives are where all information about past-state is stored
<stellar-slack> <graydon> we only store the present state in the local server's SQL tables
<stellar-slack> <graydon> so when node X tries to catch up to node Y, it asks node Y about the present and then goes to try to reconstruct some portion of the past (typically the most-recent checkpoint of each of several incrementally-written layers of state)
<stellar-slack> <graydon> if node X cannot reach the history archive node Y deposited its history in, there will be no catchup
<stellar-slack> <graydon> the [HISTORY.vs] block is an example, and .. probably not a very good one
<stellar-slack> <graydon> the problem with us giving a concrete example in that file is that we can't really give an example that shows how you're going to use it because each user will have to decide this part for themselves
<stellar-slack> <graydon> (how to store history -- in a local web server or on a cloud provider like AWS, GCS or Azure)
<stellar-slack> <graydon> the system is, in a sense, not standalone. it's also not standalone when it comes to how you access postgresql. you have to set one up yourself.
<stellar-slack> <graydon> we could try to bolt a small webserver on to the process for getting-started purposes -- the same way we bolt a copy of sqlite on to it -- but at present we only provide an example that involves simply _copying_ the history files (using 'cp') on the local filesystem into and out of a "history archive" that is a simple directory
<stellar-slack> <graydon> which works, but it only works if all nodes share access to that directory
<stellar-slack> <graydon> on disk, on their local filesystem somehow
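(The cp-based example graydon is referring to looks roughly like this in stellar-core.cfg; the paths are illustrative, {0}/{1} are filled in by stellar-core, and as noted it only works if every node can see the same directory on its local filesystem.)
```
[HISTORY.vs]
get="cp /tmp/stellar-core/history/vs/{0} {1}"
put="cp {0} /tmp/stellar-core/history/vs/{1}"
mkdir="mkdir -p /tmp/stellar-core/history/vs/{0}"
```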
<stellar-slack> <naumenkogs> I got it now.
<stellar-slack> <naumenkogs> Thank you for taking the time
kushed_AFK has joined #stellar-dev
kushed has quit [Ping timeout: 256 seconds]
kushed_AFK is now known as kushed
graydon has joined #stellar-dev
pixelbeat_ has joined #stellar-dev
pixelbeat_ has quit [Ping timeout: 260 seconds]
graydon has quit [Quit: Leaving.]
stellar-slack has quit [Remote host closed the connection]
stellar-slack has joined #stellar-dev
<stellar-slack> <matschaffer> FYI: stellar-core testnet just had a history reset and upgrade to 5f94fa0 - anyone peering will want to do the same