acetakwas has quit [Ping timeout: 256 seconds]
DomKM has joined #stellar-dev
de_henne_ has joined #stellar-dev
de_henne has quit [Ping timeout: 248 seconds]
acetakwas has joined #stellar-dev
acetakwas has quit [Read error: Connection reset by peer]
TheSeven has quit [Ping timeout: 252 seconds]
TheSeven has joined #stellar-dev
<stellar-slack> <armed10> So I've managed to set up a standalone server like we talked about yesterday. But the server still won't get to the SYNCED state unless we use --forcescp
acetakwas has joined #stellar-dev
acetakwas has quit [Max SendQ exceeded]
<stellar-slack> <armed10> I'm trying to reset the history and database to see if it will change anything
<stellar-slack> <sacarlson> I think --forcescp is always required for standalone. I would have to verify by looking at my scripts that did that
<stellar-slack> <sacarlson> yup, that's what my script did: --forcescp, then run
<stellar-slack> <sacarlson> however I haven't run standalone in some time
<stellar-slack> <armed10> alright, so standalone is working now
<stellar-slack> <armed10> thank you
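For reference, the standalone bring-up sacarlson describes usually looked roughly like this (a sketch for stellar-core of this era; it assumes a standalone stellar-core.cfg is already in place):
```
stellar-core --newdb     # re-initialize the database with a genesis ledger
stellar-core --forcescp  # flag the node to start SCP from its own state on next launch
stellar-core             # run the node; it should now reach SYNCED on its own
```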
DomKM has quit [Quit: Connection closed for inactivity]
<stellar-slack> <armed10> What steps are needed to get 2 nodes working and in sync?
<stellar-slack> <armed10> it seems to be dropping and reconnecting the same peer over and over
<stellar-slack> <sacarlson> you will need @jed or the other -core guys to help you set up configs for that, and this is not their time window of operation
<stellar-slack> <sacarlson> but I think it's meant to run with 4 at minimum
<stellar-slack> <armed10> That would be strange, since there's a docker setup that runs 3
<stellar-slack> <sacarlson> this might help UNSAFE_QUORUM=true
<stellar-slack> <sacarlson> yes, it seems the default is THRESHOLD_PERCENT=51, so 2 of 3 would still be ok
<stellar-slack> <sacarlson> so I would think cranking it to THRESHOLD_PERCENT=48 would work with 1 or 2
<stellar-slack> <armed10> Thanks, I'll try that
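A sketch of the config knobs being discussed, for a 3-node test network (the validator keys below are placeholders, not real nodes):
```
UNSAFE_QUORUM=true        # allow threshold settings core would otherwise reject as unsafe

[QUORUM_SET]
THRESHOLD_PERCENT=51      # 2 of 3 validators must agree; lower it for smaller sets
VALIDATORS=[
  "GC2V...NODE1",
  "GB6M...NODE2",
  "GDXH...NODE3"
]
```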
<stellar-slack> <armed10> Have you experienced a node trying to fetch a history file that does not exist?
<stellar-slack> <sacarlson> yes but only on testnet and live
<stellar-slack> <sacarlson> I'm not even sure how to set up saving history. I only set up as a watching node
<stellar-slack> <armed10> it's trying to fetch a non-existing history file on a second node over a local network
<stellar-slack> <armed10> I think because it's trying to catch up
<stellar-slack> <sacarlson> it gets stuck forever if it can't get the history
<stellar-slack> <armed10> yeah I've noticed :)
<stellar-slack> <sacarlson> with jed's and/or the other -core guys' help you will have a working 2-node and 3-node setup in a few minutes with correct configs
<stellar-slack> <sacarlson> once it's working, it's all the same as if you're running standalone
<stellar-slack> <mvaneijk> have you tried the docker-stellar-core configuration?
<stellar-slack> <mvaneijk> it's doing exactly the same as you are doing, but then automatically in docker containers
<stellar-slack> <armed10> well we haven't had much success with docker, but their configuration should be helpful
<stellar-slack> <armed10> thanks
<stellar-slack> <armed10> could you maybe share the config file your docker uses?
<stellar-slack> <raymens> can you post your config files?
<stellar-slack> <mvaneijk> you should create your own history store to make it work
<stellar-slack> <armed10> @raymens: are you asking me?
<stellar-slack> <armed10> it's probably just the history thing. The store of one node is not available to another
<stellar-slack> <mvaneijk> in docker the containers are connected and have disk-level access
<stellar-slack> <mvaneijk> so they can see the history of other containers, hence the `cp history` command in the `HISTORY_GET` parameter
<stellar-slack> <mvaneijk> You should use something like the AWS file storage used by the stellar testnet
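The docker setup's `cp`-based history only works because all containers share a disk; in stellar-core.cfg the equivalent archive entry looks roughly like this (paths and the archive name "local" are placeholders):
```
[HISTORY.local]
get="cp /shared/history/node1/{0} {1}"      # {0} = file in the archive, {1} = local destination
put="cp {0} /shared/history/node1/{1}"
mkdir="mkdir -p /shared/history/node1/{0}"
```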
<stellar-slack> <raymens> @armed10 yup, are you running all of the nodes on 1 machine or multiple?
<stellar-slack> <armed10> no, separate machines
<stellar-slack> <armed10> I will create a remote history; that should fix the problem
<stellar-slack> <raymens> then you need to let them access each others history by using a shared store (AWS, FTP, Azure, etc.)
triblit has joined #stellar-dev
<stellar-slack> <armed10> Should all nodes write to the externally available history or just one?
<stellar-slack> <armed10> just got `duplicate child work: put-snapshot-files`
<stellar-slack> <raymens> they shouldn't write to the same history store
<stellar-slack> <raymens> not sure if that's related to that error
<stellar-slack> <raymens> they should be able to read from the other stores though
<stellar-slack> <raymens> check https://www.stellar.org/developers/stellar-core/learn/core-data-flow.pdf for (some) more technical information about this
<stellar-slack> <armed10> it's something about the database that gives the duplicate child work error
<stellar-slack> <armed10> Apparently the error was caused by having 2 places to write history to
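In other words, each archive has one owning node that writes it, and the other nodes only read from it. A read-only archive entry is just a `get` with no `put`/`mkdir` (hostname is a placeholder):
```
[HISTORY.node1]
get="curl -sf http://node1.example.local/history/{0} -o {1}"
```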
<stellar-slack> <armed10> I'm still not entirely getting the different node types. One node, which has 2 peers, gives me 3 agrees and 2 fails on a ledger
<stellar-slack> <raymens> All 3 nodes are validators?
<stellar-slack> <armed10> well not entirely, 2 nodes have only each other as preferred nodes and 1 has all three
<stellar-slack> <armed10> And how would you add a full validator to an existing network?
<stellar-slack> <armed10> so all nodes are validators but only one is a full validator
<stellar-slack> <raymens> What's the difference between a validator and a full validator?
<stellar-slack> <eva> @raymens: here's a chart with the 4 different kinds of nodes: https://www.stellar.org/developers/stellar-core/learn/admin.html#level-of-participation-to-the-network
<stellar-slack> <raymens> thanks eva, so it's just the publishing that's different
<stellar-slack> <eva> yeah :)
<stellar-slack> <raymens> @armed10 so you can just add it by letting it do a COMPLETE catchup to the current network
<stellar-slack> <raymens> (that you invoke by calling `http://{node}/catchup?ledger=1&mode=complete`)
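For example, assuming the default admin HTTP port 11626 on the new node (a sketch):
```
curl "http://127.0.0.1:11626/catchup?ledger=1&mode=complete"
```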
<stellar-slack> <armed10> so imagine this scenario: you have a running stellar network. it has 5 validators that know each other. now you want to add a 6th validator, but its validation is only useful if other validators consider it a validator
<stellar-slack> <armed10> in the quorum
<stellar-slack> <armed10> so you have to shut down each server, edit the config and reboot?
<stellar-slack> <raymens> you have to manually add it to the quorum set
<stellar-slack> <raymens> and so far as I know, yes
<stellar-slack> <armed10> That would be inconvenient on a larger scale
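A sketch of the change each of the 5 existing validators would need in its config before a restart; the keys are placeholders, with the 6th validator's key appended:
```
[QUORUM_SET]
THRESHOLD_PERCENT=67          # e.g. 4 of 6 must agree
VALIDATORS=[
  "G...NODE1", "G...NODE2", "G...NODE3",
  "G...NODE4", "G...NODE5",
  "G...NODE6"                 # the newly added validator
]
```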
dobson has quit [Ping timeout: 240 seconds]
dobson has joined #stellar-dev
edubai__ has quit [Ping timeout: 252 seconds]
ggherdov` has quit [Ping timeout: 252 seconds]
sdehaan has quit [Ping timeout: 240 seconds]
termos has quit [Ping timeout: 240 seconds]
olinkl has quit [Ping timeout: 250 seconds]
Kwelstr has quit [Ping timeout: 252 seconds]
TheSeven has quit [Ping timeout: 260 seconds]
<stellar-slack> <mvaneijk> I guess there are a few issues about that topic in the stellar-core github repo
sdehaan has joined #stellar-dev
olinkl has joined #stellar-dev
edubai__ has joined #stellar-dev
ggherdov` has joined #stellar-dev
TheSeven has joined #stellar-dev
termos has joined #stellar-dev
Kwelstr has joined #stellar-dev
triblit has quit [Quit: http://www.kiwiirc.com/ - A hand crafted IRC client]