
# RaftLock

`raftlock` is a fault-tolerant distributed lock service for coordinating distributed applications. It is written in Go and designed to be a reliable, highly available solution for managing distributed locks.
In distributed systems, coordinating actions between services or nodes is a complex challenge. `raftlock` offers a simple, robust solution: a distributed locking mechanism that ensures only one process can access a particular resource at a time, preventing data corruption and keeping state consistent across your distributed application.
The project is built on the Raft consensus algorithm, which provides fault tolerance: `raftlock` can withstand failures of a minority of its nodes without losing availability or data.
`raftlock` is useful for a variety of coordination tasks in distributed systems, including:
- **Leader Election:** Electing a single leader from a group of nodes to perform a specific task.
- **Distributed Cron Jobs:** Ensuring that a scheduled task is executed by only one node in a cluster.
- **Resource Locking:** Preventing multiple processes from concurrently modifying a shared resource, such as a file or a database record.
- **Distributed Semaphores:** Limiting the number of concurrent processes that can access a pool of resources.
By providing a reliable, easy-to-use distributed lock service, `raftlock` simplifies the development of robust, scalable distributed applications.
## Architecture

RaftLock is composed of several key packages, each handling a specific aspect of the distributed lock service:

- `server/`: Implements the gRPC server that exposes the distributed locking API to clients. It handles request validation, leader redirection, and coordination with the underlying Raft consensus. For more details, see the Server Package README.
- `raft/`: Contains a custom-built implementation of the Raft consensus algorithm. This is the core engine for maintaining strong consistency and fault tolerance across the cluster. Dive deeper into its design in the Raft Consensus Module README.
- `lock/`: Provides the distributed lock manager, which acts as the application-level state machine for the Raft cluster. It manages lock states, expirations, and waiter queues. Learn more in the Distributed Lock Manager README.
- `storage/`: Offers a durable, crash-resilient persistence layer for Raft data, including persistent state, log entries, and snapshots. Details can be found in the RaftLock Storage Package README.
- `client/`: Provides the Go client library for applications to communicate with a RaftLock cluster, offering interfaces for standard lock operations, administration, and advanced features. For usage, refer to the Client Package README.
- `proto/`: Defines the Protocol Buffer messages and gRPC service for the RaftLock API, ensuring type-safe and efficient communication between clients and the server. See the RaftLock Protocol Buffers README.
## Getting Started

You can run the `raftlock` server either with Docker or by building from source.

### Docker (recommended)

This is the recommended way to run `raftlock`. We use `docker-compose` to easily manage a multi-node cluster.
1. **Start the cluster.** This command builds the Docker images and starts a 3-node `raftlock` cluster in the background:

   ```sh
   docker-compose up --build -d
   ```

2. **Check the logs.** You can monitor the logs of each node to watch cluster formation and leader election:

   ```sh
   docker-compose logs node1
   docker-compose logs node2
   docker-compose logs node3
   ```

3. **Stop the cluster.** To stop and remove the containers, networks, and volumes, run:

   ```sh
   docker-compose down -v
   ```
### Building from Source

If you prefer to build from the source code:

1. **Clone the repository:**

   ```sh
   git clone https://github.com/jathurchan/raftlock.git
   cd raftlock
   ```

2. **Build the binary:**

   ```sh
   go build
   ```

3. **Run a server node.** You'll need to run multiple instances on different ports to form a cluster; see the Configuration section for more details:

   ```sh
   ./raftlock --id node1 --api-addr ":8080" --raft-addr ":12379" --raft-bootstrap
   ```
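A local three-node cluster might be started along these lines. Only `--id`, `--api-addr`, `--raft-addr`, and `--raft-bootstrap` appear above, so the `--peers` flag used for the joining nodes here is a placeholder assumption — check the Configuration section for the actual peer-discovery option.

```shell
# Node 1 bootstraps the cluster (flags taken from the example above).
./raftlock --id node1 --api-addr ":8080" --raft-addr ":12379" --raft-bootstrap

# Nodes 2 and 3 join on different ports. NOTE: --peers is hypothetical;
# substitute the real flag documented in the Configuration section.
./raftlock --id node2 --api-addr ":8081" --raft-addr ":12380" --peers "localhost:12379"
./raftlock --id node3 --api-addr ":8082" --raft-addr ":12381" --peers "localhost:12379"
```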
## Example: Payment Processing

To see `raftlock` in action, you can run the payment example located in the `examples/payment` directory. This example demonstrates how to use a distributed lock to ensure a payment is handled by only one node at a time, preventing double spending.

Run the example with the following command:

```sh
go run examples/payment/main.go --payment-id payment456 --client-id client002 --servers localhost:8080
```

This starts a client that interacts with the `raftlock` server to acquire a lock before processing a mock payment. You will see output indicating whether the lock was acquired and the payment was processed.
## REST API

You can interact with `raftlock` using its simple REST API. Here are some examples using `curl`.

### Acquiring a Lock

To acquire a lock, send a `POST` request to the `/lock` endpoint with the `resource` you want to lock and a `ttl` (time-to-live) in seconds:
```sh
curl -X POST -H "Content-Type: application/json" -d '{
  "resource": "my-critical-resource",
  "ttl": 30
}' http://localhost:8080/lock
```

If successful, you will receive a `lock_id`.
### Releasing a Lock

To release a lock, send a `POST` request to the `/unlock` endpoint with the `lock_id` you received when acquiring the lock:

```sh
curl -X POST -H "Content-Type: application/json" -d '{
  "lock_id": "your-lock-id-here"
}' http://localhost:8080/unlock
```
## Support

If you have any questions or encounter any issues while using `raftlock`, please feel free to open an issue on the GitHub repository. We will do our best to help you as soon as possible.
## Contributing

We welcome contributions from the community! If you are interested in contributing to `raftlock`, please follow these steps:

1. Fork the repository on GitHub.
2. Create a new branch for your feature or bug fix: `git checkout -b my-new-feature`.
3. Make your changes and commit them with clear, descriptive messages.
4. Run the tests to ensure everything is working: `go test ./...`.
5. Push your branch to your fork: `git push origin my-new-feature`.
6. Submit a pull request to the `main` branch of the `jathurchan/raftlock` repository.
## Maintainers

`raftlock` is maintained by a team of dedicated developers:

- Jathurchan Selvakumar
- Patrice Zhou
- Mathusan Selvakumar
