
The CAP Theorem: Consistency, Availability, and Partition Tolerance
Distributed data systems must balance three competing guarantees, formalized in Eric Brewer's CAP theorem: consistency, availability, and partition tolerance. Because no system can fully provide all three at once, every distributed database makes a deliberate trade-off among them.
In this essay, we will define each of the three guarantees, examine how real systems such as DynamoDB, Spanner, MongoDB, HBase, and Cassandra position themselves, and explain why partition tolerance is effectively non-negotiable.
Consistency:
Every read receives the most recent write or an error.
In a consistent system, all nodes see the same data at the same time.
Note that this is linearizability, a single up-to-date view of the data; it is not the same as the "C" in ACID, which refers to transaction integrity constraints.
Availability:
Every request (read or write) received by a non-failing node gets a non-error response, though the response may not reflect the most recent write.
The system remains operational and accessible even during failures.
High availability ensures system uptime but may not guarantee the latest data.
Partition Tolerance:
The system continues to operate despite arbitrary network partitions.
Network failures (e.g., dropped or delayed messages) are inevitable in any distributed environment.
A partition-tolerant system must function even if communication between nodes is broken.
Brewer’s theorem states that a real-world distributed system can fully provide at most two of the three guarantees at the same time:
CP (Consistency and Partition Tolerance): Prioritizes consistency and partition tolerance at the cost of availability.
AP (Availability and Partition Tolerance): Prioritizes availability and partition tolerance but sacrifices strict consistency.
CA (Consistency and Availability): Impossible in the presence of network partitions, making it largely theoretical.
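To make the choice concrete, here is a minimal, purely illustrative Python sketch of a replica deciding how to answer a read when its peer may be unreachable. The Replica class, heartbeat check, and timeout are all invented for this example:

    import time

    class Replica:
        def __init__(self, value=None):
            self.value = value
            self.last_heartbeat = time.time()  # last contact with peer

        def peer_reachable(self, timeout=1.0):
            # Stand-in for a real heartbeat / failure detector.
            return (time.time() - self.last_heartbeat) < timeout

        def read(self, mode="CP"):
            if self.peer_reachable():
                # No partition: the local copy is known to be in sync,
                # so consistency and availability both hold.
                return self.value
            if mode == "CP":
                # Consistency first: refuse to answer rather than risk
                # serving stale data (availability is sacrificed).
                raise RuntimeError("partition detected; read rejected")
            # AP: stay available and serve the local, possibly stale copy.
            return self.value

The same partitioned state yields an error under CP and a stale-but-successful answer under AP, which is exactly the fork the theorem describes.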
Amazon DynamoDB (AP System)
Amazon’s DynamoDB is an AP system that favors availability and partition tolerance. By default it adopts an eventual consistency model, meaning that after a write all replicas eventually converge, but reads may briefly observe stale data in the interim.
Eventual Consistency: DynamoDB permits stale reads to maintain high availability during partitioning, bringing all replicas back into agreement as network failures resolve; strongly consistent reads are available as a per-request opt-in at extra read-capacity cost.
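This choice surfaces directly in the client API. As a minimal sketch using boto3 (the table name and key below are hypothetical), a read can opt into strong consistency per request:

    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("Products")  # hypothetical table name

    # Default read is eventually consistent: cheap and highly available,
    # but it may return a slightly stale item right after a write.
    eventually = table.get_item(Key={"sku": "abc-123"})

    # Opting into a strongly consistent read returns the latest committed
    # value, trading some availability and read throughput for it.
    strongly = table.get_item(Key={"sku": "abc-123"}, ConsistentRead=True)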
Google Spanner (CP System)
Google Spanner is a globally distributed, strongly consistent database. It achieves consistency and partition tolerance through synchronized clocks and multi-version concurrency control.
Spanner uses TrueTime, a global clock synchronization mechanism, to provide globally consistent, serializable transactions across distributed nodes.
Trade-off: Spanner favors consistency and partition tolerance, but these strong guarantees cost extra write latency (transactions must wait out clock uncertainty before committing) and operational complexity.
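To illustrate the TrueTime idea, here is a toy Python model of commit-wait. It is a conceptual sketch only, with the clock-uncertainty bound invented for the example, not Spanner's actual implementation:

    import time
    from dataclasses import dataclass

    @dataclass
    class TTInterval:
        earliest: float  # lower bound on the true physical time
        latest: float    # upper bound on the true physical time

    def tt_now(uncertainty: float = 0.004) -> TTInterval:
        # Stand-in for TrueTime's now(): real deployments bound the
        # uncertainty to a few milliseconds using GPS and atomic clocks;
        # here we simply pad the local clock.
        t = time.time()
        return TTInterval(t - uncertainty, t + uncertainty)

    def commit(apply_writes):
        # Choose a commit timestamp no earlier than any true time so far.
        ts = tt_now().latest
        apply_writes()
        # Commit wait: hold the result until ts is guaranteed to be in
        # the past on every node, so transaction order matches timestamp
        # order globally.
        while tt_now().earliest <= ts:
            time.sleep(0.001)
        return ts

The deliberate pause at the end is the performance cost mentioned above: every commit waits out the clock-uncertainty window to preserve global ordering.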
MongoDB (Configurable AP/CP Behavior)
MongoDB provides tunable consistency and availability. Developers can choose between strong consistency (CP) and eventual consistency (AP) depending on use cases by adjusting write concern and read preference settings.
Trade-off: This configurability makes MongoDB suitable for a variety of applications, from content management systems (where eventual consistency is acceptable) to financial transactions (where strict consistency is necessary).
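As a sketch of how this tuning looks in practice with pymongo (the connection string, database, and collection names are assumptions for the example), the same database can expose both CP-leaning and AP-leaning collections:

    from pymongo import MongoClient, ReadPreference
    from pymongo.read_concern import ReadConcern
    from pymongo.write_concern import WriteConcern

    # Connection string and names are assumptions for the example.
    client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
    db = client["shop"]

    # CP-leaning: a write is acknowledged only once a majority of replica
    # set members hold it, and reads see only majority-committed data.
    orders = db.get_collection(
        "orders",
        write_concern=WriteConcern(w="majority"),
        read_concern=ReadConcern("majority"),
        read_preference=ReadPreference.PRIMARY,
    )

    # AP-leaning: single-node acknowledgement and secondary reads keep
    # latency low but may surface stale data during replication lag.
    page_views = db.get_collection(
        "page_views",
        write_concern=WriteConcern(w=1),
        read_preference=ReadPreference.SECONDARY_PREFERRED,
    )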
Consistency over Availability (CP Systems)
In systems that prioritize consistency, the idea is to ensure that all replicas of the data reflect the most recent state, even at the cost of availability during network partitions.
Example:
HBase (a distributed database) favors strong consistency and sacrifices availability when network partitions occur. It guarantees that reads always reflect the latest writes.
Use Case:
Systems where data correctness is critical, such as banking or financial applications. Consistency in monetary transactions is non-negotiable.
Availability over Consistency (AP Systems)
Systems that prioritize availability are designed to remain operational, serving requests even during network partitions, but they may serve stale data (lack consistency).
Example:
Cassandra, a NoSQL database, prioritizes availability and partition tolerance: during a network partition it may serve stale reads, but the system remains available (a client-side sketch follows this list).
Use Case:
Applications like social networks or e-commerce where uptime is crucial, and stale data (such as slightly out-of-date product listings) is acceptable in the short term.
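To make Cassandra's position concrete, here is a hedged sketch using the DataStax Python driver, where per-query consistency levels move the system along the availability-consistency spectrum (the contact point, keyspace, and table are assumptions):

    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    cluster = Cluster(["127.0.0.1"])   # assumed contact point
    session = cluster.connect("shop")  # assumed keyspace

    # ONE: answer as soon as any single replica responds. Maximally
    # available during a partition, but the row may be stale.
    fast_read = SimpleStatement(
        "SELECT * FROM products WHERE sku = %s",
        consistency_level=ConsistencyLevel.ONE,
    )
    row = session.execute(fast_read, ("abc-123",)).one()

    # QUORUM: require a majority of replicas, pushing Cassandra toward
    # consistency; the query fails if a majority is unreachable.
    safe_read = SimpleStatement(
        "SELECT * FROM products WHERE sku = %s",
        consistency_level=ConsistencyLevel.QUORUM,
    )
    row = session.execute(safe_read, ("abc-123",)).one()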
Partition Tolerance as a Must-Have
Partition tolerance is not optional for modern distributed systems, especially cloud-based architectures, where network failures between geographically distributed nodes are a matter of when, not if. In practice, then, the CAP trade-off reduces to choosing between consistency and availability while a partition is in progress.