On each etcd node, specify the cluster members:

TOKEN=token-01
CLUSTER_STATE=new
NAME_1=machine-1
NAME_2=machine-2
NAME_3=machine-3
HOST_1=10.240.0.17
HOST_2=10.240.0.18
HOST_3=10.240.0.19
CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_3}=http://${HOST_3}:2380

Run this on each machine:

# For machine 1
THIS_NAME=${NAME_1}
THIS_IP=${HOST_1}
etcd --data-dir=data.etcd --name ${THIS_NAME} \
    --initial-advertise-peer-urls http://${THIS_IP}:2380 --listen-peer-urls http://$...
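The command above is cut off; a complete per-machine invocation would follow this pattern (a sketch assembled from the variables above using the standard static-bootstrap flags; verify against your etcd version). Machines 2 and 3 substitute their own NAME and HOST values.

# For machine 1
THIS_NAME=${NAME_1}
THIS_IP=${HOST_1}
etcd --data-dir=data.etcd --name ${THIS_NAME} \
    --initial-advertise-peer-urls http://${THIS_IP}:2380 \
    --listen-peer-urls http://${THIS_IP}:2380 \
    --advertise-client-urls http://${THIS_IP}:2379 \
    --listen-client-urls http://${THIS_IP}:2379 \
    --initial-cluster ${CLUSTER} \
    --initial-cluster-state ${CLUSTER_STATE} \
    --initial-cluster-token ${TOKEN}

The etcdctl examples below assume an ENDPOINTS variable pointing at the client ports, for example ENDPOINTS=$HOST_1:2379,$HOST_2:2379,$HOST_3:2379.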
Prerequisites

Install etcdctl

Procedure

Use the get subcommand to read from etcd:

$ etcdctl --endpoints=$ENDPOINTS get foo
foo
Hello World!
$

where:

foo is the requested key
Hello World! is the retrieved value

Or, for formatted output:

$ etcdctl --endpoints=$ENDPOINTS --write-out="json" get foo
{"header":{"cluster_id":289318470931837780,"member_id":14947050114012957595,"revision":3,"raft_term":4},"kvs":[{"key":"Zm9v","create_revision":2,"mod_revision":3,"version":2,"value":"SGVsbG8gV29ybGQh"}...

Note that in JSON output the key and value are base64-encoded: "Zm9v" is "foo" and "SGVsbG8gV29ybGQh" is "Hello World!".
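Because the JSON output reports revisions, a key can also be read as of an earlier revision. A minimal sketch (the revision number is illustrative, taken from create_revision in the output above):

# read foo as it was at revision 2
etcdctl --endpoints=$ENDPOINTS get foo --rev=2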
Prerequisites

Install etcdctl

Procedure

Use the put subcommand to write a key-value pair:

etcdctl --endpoints=$ENDPOINTS put foo "Hello World!"

where:

foo is the key name
"Hello World!" is the quote-delimited value
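When overwriting an existing key, the previous pair can be returned at the same time; a small sketch using put's --prev-kv flag (the new value "Goodbye" is illustrative, and the comment describes the expected shape of the output rather than exact text):

# writes the new value and echoes the key-value pair it replaced
etcdctl --endpoints=$ENDPOINTS put foo "Goodbye" --prev-kv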
Prerequisites

Install etcdctl
Set up a local cluster

Get keys by prefix

$ etcdctl --endpoints=$ENDPOINTS get PREFIX --prefix

Global Options

--endpoints=[127.0.0.1:2379], gRPC endpoints

Options

--prefix, get a range of keys with matching prefix

Example

etcdctl --endpoints=$ENDPOINTS put web1 value1
etcdctl --endpoints=$ENDPOINTS put web2 value2
etcdctl --endpoints=$ENDPOINTS put web3 value3
etcdctl --endpoints=$ENDPOINTS get web --prefix
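Prefix reads can be narrowed further; a small sketch using two standard get flags, --keys-only and --limit (key names reuse the example above):

# list only the key names under the prefix
etcdctl --endpoints=$ENDPOINTS get web --prefix --keys-only

# return at most two of the matching keys
etcdctl --endpoints=$ENDPOINTS get web --prefix --limit=2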
Prerequisites

Install etcd and etcdctl

Add or delete keys

del to remove the specified key or range of keys:

etcdctl del $KEY [$END_KEY]

Options

--prefix[=false]: delete keys with matching prefix
--prev-kv[=false]: return deleted key-value pairs
--from-key[=false]: delete keys that are greater than or equal to the given key using byte compare
--range[=false]: delete range of keys without delay

Options inherited from parent commands

--endpoints="127.0.0.1:2379": gRPC endpoints

Examples

etcdctl -...
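The examples above are truncated; a minimal sketch of typical deletes, assuming the web1/web2/web3 keys written in the prefix example:

# delete a single key and report the removed pair
etcdctl --endpoints=$ENDPOINTS del web1 --prev-kv

# delete every key under a prefix
etcdctl --endpoints=$ENDPOINTS del web --prefix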
Prerequisites

Install etcd and etcdctl.
A running etcd cluster.

Terminology

Here are definitions of some key terms used in the example below.

etcdctl: The command-line tool for interacting with the etcd server.

txn command: txn is an abbreviation for "transaction". It reads multiple etcd requests from standard input and applies them as a single atomic transaction. A transaction consists of a list of conditions, a list of requests to apply if all the conditions are true, and a list of requests to apply if any condition is false.
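The example itself is not included above; a minimal sketch of an interactive transaction, assuming a key user1 whose value decides which branch runs (the key and values are illustrative). The lines after the txn command are typed at the interactive prompts:

etcdctl --endpoints=$ENDPOINTS put user1 bad
etcdctl --endpoints=$ENDPOINTS txn --interactive

compares:
value("user1") = "bad"

success requests (get, put, del):
del user1

failure requests (get, put, del):
put user1 good

If user1 still holds "bad", the condition is true and the key is deleted; otherwise the failure request rewrites it.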
Prerequisites

Install etcd and etcdctl

Watching keys

watch to get notified of future changes:

etcdctl watch $KEY [$END_KEY]

Options

-i, --interactive[=false]: interactive mode
--prefix[=false]: watch on a prefix if prefix is set
--rev=0: Revision to start watching
--prev-kv[=false]: get the previous key-value pair before the event happens
--progress-notify[=false]: get periodic watch progress notification from server

Options inherited from parent commands

--endpoints="127.0.0.1:2379": gRPC endpoints
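A minimal usage sketch, run in two terminals (the key name stock1 is illustrative):

# terminal 1: block and print events for the key
etcdctl --endpoints=$ENDPOINTS watch stock1

# terminal 2: write to the key; terminal 1 prints the PUT event
etcdctl --endpoints=$ENDPOINTS put stock1 1000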
lease to write with TTL:

etcdctl --endpoints=$ENDPOINTS lease grant 300
# lease 2be7547fbc6a5afa granted with TTL(300s)

etcdctl --endpoints=$ENDPOINTS put sample value --lease=2be7547fbc6a5afa
etcdctl --endpoints=$ENDPOINTS get sample

etcdctl --endpoints=$ENDPOINTS lease keep-alive 2be7547fbc6a5afa
etcdctl --endpoints=$ENDPOINTS lease revoke 2be7547fbc6a5afa
# or after 300 seconds
etcdctl --endpoints=$ENDPOINTS get sample
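To inspect a lease before it expires, the lease timetolive subcommand reports the remaining TTL; a small sketch reusing the lease ID from the example above (a real ID will differ):

# show remaining TTL and, with --keys, the keys attached to the lease
etcdctl --endpoints=$ENDPOINTS lease timetolive 2be7547fbc6a5afa --keys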
lock for distributed lock:

etcdctl --endpoints=$ENDPOINTS lock mutex1

# another client with the same name blocks
etcdctl --endpoints=$ENDPOINTS lock mutex1
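lock can also run a command while holding the lock, releasing it when the command exits; a small sketch (the echo stands in for real work done under the lock):

# acquire mutex1, run the command, then release the lock on exit
etcdctl --endpoints=$ENDPOINTS lock mutex1 echo "critical section"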
Prerequisites

Ensure etcd and etcdctl are installed.
Check for an active etcd cluster.

elect for leader election:

The etcdctl elect command is used to conduct leader elections in an etcd cluster. It makes sure that only one client becomes leader at a time.

Ensure the ENDPOINTS variable is set with the addresses of the etcd cluster members. Set a unique name for the election that the different clients share ('one' in the code below). Lastly, set a different leader name for each client (p1 and p2).

Comman...
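The command section is cut off above; a minimal sketch of two competing clients, following the usage etcdctl elect <election-name> <proposal>:

# client 1 campaigns in election "one" with proposal p1 and becomes leader
etcdctl --endpoints=$ENDPOINTS elect one p1

# client 2, in another terminal, blocks until the first leader resigns or disconnects
etcdctl --endpoints=$ENDPOINTS elect one p2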
Prerequisites

Install etcd and etcdctl

Check Overall Status

endpoint status to check the overall status of each endpoint specified in the --endpoints flag:

etcdctl endpoint status (--endpoints=$ENDPOINTS|--cluster)

Options

--cluster[=false]: use all endpoints from the cluster member list

Check Health

endpoint health to check the healthiness of each endpoint specified in the --endpoints flag:

etcdctl endpoint health (--endpoints=$ENDPOINTS|--cluster)

Options

--cluster[=false]: use all endpoints from the cluster member list
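A small usage sketch; the -w table flag (short for --write-out=table) renders the per-endpoint status fields as a table:

etcdctl --endpoints=$ENDPOINTS endpoint status -w table
etcdctl --endpoints=$ENDPOINTS endpoint health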
Prerequisites

Install etcdctl and etcdutl
Set up a local cluster

Snapshot a database

snapshot to save a point-in-time snapshot of the etcd database:

etcdctl --endpoints=$ENDPOINT snapshot save DB_NAME

Global Options

etcdctl
--endpoints=[127.0.0.1:2379], gRPC endpoints
A snapshot can only be requested from one etcd node, so the --endpoints flag should contain only one endpoint.

etcdutl
-w, --write-out string: set the output format (fields, json, protobuf, simple, table) (default "simple")

Example

ENDPOINTS=$HO...
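The example is cut off above; a minimal sketch, assuming a single endpoint $HOST_1:2379 as in the cluster setup earlier and an output file named my.db:

ENDPOINTS=$HOST_1:2379
etcdctl --endpoints=$ENDPOINTS snapshot save my.db

# inspect the saved snapshot file with etcdutl
etcdutl --write-out=table snapshot status my.db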
migrate to transform etcd v2 to v3 data:

# write key in etcd version 2 store
export ETCDCTL_API=2
etcdctl --endpoints=http://$ENDPOINT set foo bar

# read key in etcd v2
etcdctl --endpoints=$ENDPOINTS --output="json" get foo

# stop etcd node to migrate, one by one

# migrate v2 data
export ETCDCTL_API=3
etcdctl --endpoints=$ENDPOINT migrate --data-dir="default.etcd" --wal-dir="default.etcd/member/wal"

# restart etcd node after migrate, one by one

# confirm that the key got migrated
etcdctl --endpoints=$END...
member to add, remove, or update membership:

# For each machine
TOKEN=my-etcd-token-1
CLUSTER_STATE=new
NAME_1=etcd-node-1
NAME_2=etcd-node-2
NAME_3=etcd-node-3
HOST_1=10.240.0.13
HOST_2=10.240.0.14
HOST_3=10.240.0.15
CLUSTER=${NAME_1}=http://${HOST_1}:2380,${NAME_2}=http://${HOST_2}:2380,${NAME_3}=http://${HOST_3}:2380

# For node 1
THIS_NAME=${NAME_1}
THIS_IP=${HOST_1}
etcd --data-dir=data.etcd --name ${THIS_NAME} \
    --initial-advertise-peer-urls http://${THIS_IP}:2380 \
    --listen-peer-urls http://${THI...
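The membership commands themselves are cut off above; a minimal sketch of the usual flow (the new node's name, its peer URL, and the MEMBER_ID placeholder are illustrative):

# list current members and their IDs
etcdctl --endpoints=$ENDPOINTS member list

# register a new member before starting its etcd process
etcdctl --endpoints=$ENDPOINTS member add etcd-node-4 --peer-urls=http://10.240.0.16:2380

# update a member's peer URL, or remove it, by member ID
etcdctl --endpoints=$ENDPOINTS member update MEMBER_ID --peer-urls=http://10.240.0.16:2380
etcdctl --endpoints=$ENDPOINTS member remove MEMBER_ID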
The discovery service protocol helps new etcd members discover all the other members during the cluster bootstrap phase using a shared discovery URL. The discovery service protocol is only used in the cluster bootstrap phase and cannot be used for runtime reconfiguration or cluster monitoring. The protocol uses a new discovery token to bootstrap one unique etcd cluster. Remember that one discovery token can represent only one etcd cluster: once the discovery protocol has started on a token, even if it fails halfway, that token must not be reused to bootstrap another etcd cluster.
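As a concrete illustration, a sketch of bootstrapping against the public discovery service (the cluster size, peer address, and token URL are placeholders; the flags are the standard etcd discovery flags):

# request a discovery URL for an expected cluster size of 3
curl 'https://discovery.etcd.io/new?size=3'
# the service returns a URL of the form https://discovery.etcd.io/<token>

# each new member starts with --discovery instead of a static --initial-cluster
etcd --name machine-1 --data-dir=data.etcd \
    --initial-advertise-peer-urls http://10.240.0.17:2380 \
    --listen-peer-urls http://10.240.0.17:2380 \
    --discovery https://discovery.etcd.io/<token>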
etcd is designed to reliably store infrequently updated data and provide reliable watch queries. etcd exposes previous versions of key-value pairs to support inexpensive snapshots and watch history events (“time travel queries”). A persistent, multi-version, concurrency-control data model is a good fit for these use cases. etcd stores data in a multiversion persistent key-value store. The persistent key-value store preserves the previous version of a key-value pair when its value is superseded with new data.
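To see the multiversion behaviour from the command line, a small sketch (key and values are illustrative): writing the same key twice keeps the older version addressable by revision.

etcdctl --endpoints=$ENDPOINTS put mykey v1
etcdctl --endpoints=$ENDPOINTS put mykey v2

# the JSON output reports create_revision, mod_revision, and version
etcdctl --endpoints=$ENDPOINTS get mykey -w json

# read the older value by asking for the revision at which it was written
etcdctl --endpoints=$ENDPOINTS get mykey --rev=REVISION_OF_FIRST_PUT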
This API reference is autogenerated from the named .proto files.

service Auth (api/etcdserverpb/rpc.proto)

Method | Request Type | Response Type | Description
AuthEnable | AuthEnableRequest | AuthEnableResponse | AuthEnable enables authentication.
AuthDisable | AuthDisableRequest | AuthDisableResponse | AuthDisable disables authentication.
AuthStatus | AuthStatusRequest | AuthStatusResponse | AuthStatus displays authentication status.
Authenticate | AuthenticateRequest | AuthenticateResponse | Authenticate processes an authenticate request.
...
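These RPCs are surfaced through etcdctl's auth and user subcommands; a minimal sketch of enabling and checking authentication (the root password is a placeholder, and a root user must exist before authentication can be enabled):

# create the root user required by the Auth service (prompts for a password)
etcdctl --endpoints=$ENDPOINTS user add root

etcdctl --endpoints=$ENDPOINTS auth enable                          # AuthEnable
etcdctl --endpoints=$ENDPOINTS --user root:PASSWORD auth status     # AuthStatus
etcdctl --endpoints=$ENDPOINTS --user root:PASSWORD auth disable    # AuthDisable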