Pose Estimators
WPILib includes pose estimators for differential, swerve, and mecanum drivetrains. These estimators are designed as drop-in replacements for the existing odometry classes that additionally support fusing latency-compensated robot pose measurements with encoder and gyro measurements, letting them account for encoder drift and noisy vision data. If only update is called, an estimator behaves identically to its corresponding odometry class.
Pose estimators estimate robot position using a state-space system with the states \(\begin{bmatrix}x & y & \theta \end{bmatrix}^T\), which can represent robot position as a Pose2d. WPILib includes DifferentialDrivePoseEstimator, SwerveDrivePoseEstimator, and MecanumDrivePoseEstimator to estimate robot position. In these, users call update periodically with encoder and gyro measurements (same as the odometry classes) to update the robot's estimated position. When the robot receives measurements of its field-relative position (encoded as a Pose2d) from sensors such as computer vision or V-SLAM, the pose estimator latency-compensates the measurement to accurately estimate robot position.
Here's how to initialize a DifferentialDrivePoseEstimator:
Java

private final DifferentialDrivePoseEstimator m_poseEstimator =
    new DifferentialDrivePoseEstimator(
        m_kinematics,
        m_gyro.getRotation2d(),
        m_leftEncoder.getDistance(),
        m_rightEncoder.getDistance(),
        new Pose2d(),
        // State (odometry) standard deviations: x (meters), y (meters), heading (radians).
        VecBuilder.fill(0.05, 0.05, Units.degreesToRadians(5)),
        // Vision measurement standard deviations: x (meters), y (meters), heading (radians).
        VecBuilder.fill(0.5, 0.5, Units.degreesToRadians(30)));
C++

frc::DifferentialDrivePoseEstimator m_poseEstimator{
    m_kinematics,
    m_gyro.GetRotation2d(),
    units::meter_t{m_leftEncoder.GetDistance()},
    units::meter_t{m_rightEncoder.GetDistance()},
    frc::Pose2d{},
    // State (odometry) standard deviations: x (meters), y (meters), heading (radians).
    {0.01, 0.01, 0.01},
    // Vision measurement standard deviations: x (meters), y (meters), heading (radians).
    {0.1, 0.1, 0.1}};
Add odometry measurements every loop by calling update() (Java) or Update() (C++).
Java

m_poseEstimator.update(
    m_gyro.getRotation2d(), m_leftEncoder.getDistance(), m_rightEncoder.getDistance());
C++

m_poseEstimator.Update(m_gyro.GetRotation2d(),
                       units::meter_t{m_leftEncoder.GetDistance()},
                       units::meter_t{m_rightEncoder.GetDistance()});
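If odometry samples carry their own timestamps (for example, when they are collected in a faster notifier thread), the estimators also provide an updateWithTime overload that takes the sample's capture time instead of assuming "now". A minimal Java sketch, where sampleTimeSeconds is a hypothetical timestamp recorded when the encoder and gyro readings were taken:

// Hypothetical capture time recorded alongside the encoder/gyro sample.
double sampleTimeSeconds = Timer.getFPGATimestamp();

m_poseEstimator.updateWithTime(
    sampleTimeSeconds,
    m_gyro.getRotation2d(),
    m_leftEncoder.getDistance(),
    m_rightEncoder.getDistance());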
Add vision pose measurements occasionally by calling addVisionMeasurement() (Java) or AddVisionMeasurement() (C++).
Java

// Compute the robot's field-relative position exclusively from vision measurements.
Pose3d visionMeasurement3d =
    objectToRobotPose(m_objectInField, m_robotToCamera, m_cameraToObjectEntry);

// Convert robot pose from Pose3d to Pose2d needed to apply vision measurements.
Pose2d visionMeasurement2d = visionMeasurement3d.toPose2d();

// Apply vision measurements. For simulation purposes only, we don't input a latency delay -- on
// a real robot, this must be calculated based either on known latency or timestamps.
m_poseEstimator.addVisionMeasurement(visionMeasurement2d, Timer.getFPGATimestamp());
C++

// Compute the robot's field-relative position exclusively from vision
// measurements.
frc::Pose3d visionMeasurement3d = ObjectToRobotPose(
    m_objectInField, m_robotToCamera, m_cameraToObjectEntryRef);

// Convert robot's pose from Pose3d to Pose2d needed to apply vision
// measurements.
frc::Pose2d visionMeasurement2d = visionMeasurement3d.ToPose2d();

// Apply vision measurements. For simulation purposes only, we don't input a
// latency delay -- on a real robot, this must be calculated based either on
// known latency or timestamps.
m_poseEstimator.AddVisionMeasurement(visionMeasurement2d,
                                     frc::Timer::GetFPGATimestamp());
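On a real robot, the timestamp passed to addVisionMeasurement should be the time the frame was captured, not the time the result arrived. A minimal Java sketch, where latencySeconds is a hypothetical latency value reported by your vision pipeline:

// Hypothetical latency reported by the vision pipeline, in seconds.
double latencySeconds = 0.02;

// Latency-compensate: the measurement describes where the robot was when the
// frame was captured, not where it is now.
double captureTimestampSeconds = Timer.getFPGATimestamp() - latencySeconds;

m_poseEstimator.addVisionMeasurement(visionMeasurement2d, captureTimestampSeconds);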
Tuning Pose Estimators
All pose estimators offer user-customizable standard deviations for model and measurements (defaults are used if you don’t provide them). Standard deviation is a measure of how spread out the noise is for a random signal. Giving a state a smaller standard deviation means it will be trusted more during data fusion.
For example, increasing the standard deviation for measurements (as one might do for a noisy signal) would lead to the estimator trusting its state estimate more than the incoming measurements. On the field, this might mean that the filter can reject noisy vision data well, at the cost of being slow to correct for model deviations. While these values can be estimated beforehand, they very much depend on the unique setup of each robot and global measurement method.
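If your trust in vision measurements changes at runtime, the standard deviations can also be adjusted on the fly with setVisionMeasurementStdDevs rather than fixed at construction. A minimal Java sketch with illustrative values:

// Trust vision less: loosen the x/y standard deviations to 0.9 meters and the
// heading standard deviation to 45 degrees.
m_poseEstimator.setVisionMeasurementStdDevs(
    VecBuilder.fill(0.9, 0.9, Units.degreesToRadians(45)));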
When incorporating AprilTag poses, make the vision heading standard deviation very large, make the gyro heading standard deviation small, and scale the vision x and y standard deviations by the distance from the tag.
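As an illustration of that advice, here is a hedged Java sketch. The cameraToTag transform, the 0.1 meters-per-meter scaling factor, and the 360-degree heading standard deviation are assumptions for illustration only; tune them for your own camera and field setup. The three-argument addVisionMeasurement overload applies standard deviations to this measurement only:

// Hypothetical distance from the camera to the observed AprilTag, in meters.
double distanceToTagMeters = cameraToTag.getTranslation().getNorm();

// Scale translational trust with distance, and effectively ignore the vision
// heading by giving it a very large standard deviation (trust the gyro instead).
var visionStdDevs =
    VecBuilder.fill(
        0.1 * distanceToTagMeters,    // x std dev, meters (assumed scaling)
        0.1 * distanceToTagMeters,    // y std dev, meters (assumed scaling)
        Units.degreesToRadians(360)); // heading std dev: effectively ignored

// captureTimestampSeconds is the latency-compensated frame time, as above.
m_poseEstimator.addVisionMeasurement(
    visionMeasurement2d, captureTimestampSeconds, visionStdDevs);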