SfmModel
-
class packnet_sfm.models.SfmModel.SfmModel(depth_net=None, pose_net=None, rotation_mode='euler', flip_lr_prob=0.0, upsample_depth_maps=False, **kwargs)[source]
Bases: torch.nn.modules.module.Module
Model class encapsulating a depth network and a pose network.
- Parameters
depth_net (nn.Module) – Depth network to be used
pose_net (nn.Module) – Pose network to be used
rotation_mode (str) – Rotation mode for the pose network
flip_lr_prob (float) – Probability of horizontally flipping the input image when using the depth network
upsample_depth_maps (bool) – True if depth map scales are upsampled to highest resolution
kwargs (dict) – Extra parameters
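The constructor simply wires the two sub-networks together, so any pair of nn.Module instances with compatible interfaces can be passed in. A minimal sketch of building the model (DummyDepthNet and DummyPoseNet are hypothetical stand-ins, not part of packnet_sfm):

    import torch.nn as nn
    from packnet_sfm.models.SfmModel import SfmModel

    # Hypothetical placeholder networks, used only to illustrate the constructor.
    class DummyDepthNet(nn.Module):
        def forward(self, x):
            return x

    class DummyPoseNet(nn.Module):
        def forward(self, image, contexts):
            return image

    model = SfmModel(
        depth_net=DummyDepthNet(),
        pose_net=DummyPoseNet(),
        rotation_mode='euler',      # rotation parameterization for the pose network
        flip_lr_prob=0.5,           # flip input images left-right half of the time
        upsample_depth_maps=True,   # upsample multi-scale depth maps to full resolution
    )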
-
add_depth_net(depth_net)[source] Add a depth network to the model
-
add_loss(key, val)[source] Add a new loss to the dictionary, detaching it first.
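Because the stored value is detached, the loss dictionary can be kept for logging without retaining the autograd graph. A short sketch, reusing the model instance from above (the loss value is illustrative):

    import torch

    # A stand-in for a loss computed somewhere in the training step.
    photometric_loss = (torch.rand(4, requires_grad=True) ** 2).mean()

    # add_loss() stores a detached copy under the given key ...
    model.add_loss('photometric_loss', photometric_loss)
    # ... which is then exposed through the `losses` property.
    print(model.losses['photometric_loss'])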
-
add_pose_net(pose_net)[source] Add a pose network to the model
-
compute_inv_depths(image)[source] Computes inverse depth maps from single images
-
compute_poses(image, contexts)[source] Compute poses from an image and a sequence of context images
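Both helpers operate directly on image tensors. A sketch of calling them, assuming the model holds real packnet_sfm depth and pose networks (not the dummy stand-ins above) and that images are [B, 3, H, W] batches; the multi-scale list return structure is an assumption:

    import torch

    image = torch.rand(2, 3, 192, 640)              # target frames, [B, 3, H, W]
    contexts = [torch.rand(2, 3, 192, 640),         # e.g. previous frame
                torch.rand(2, 3, 192, 640)]         # e.g. next frame

    inv_depths = model.compute_inv_depths(image)    # inverse depth map(s) for `image`
    poses = model.compute_poses(image, contexts)    # one predicted pose per context image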
-
forward(batch, return_logs=False)[source] Processes a batch.
- Parameters
batch (dict) – Input batch
return_logs (bool) – True if logs are stored
- Returns
output – Dictionary containing predicted inverse depth maps and poses
- Return type
dict
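During training the model is normally driven through forward() with a batch dictionary. A hedged sketch; the batch keys used here ('rgb' and 'rgb_context') follow common packnet_sfm dataset conventions and are assumptions, as are the output keys shown in the comment:

    import torch

    batch = {
        'rgb': torch.rand(2, 3, 192, 640),           # target images
        'rgb_context': [torch.rand(2, 3, 192, 640),  # context images for pose estimation
                        torch.rand(2, 3, 192, 640)],
    }

    output = model(batch, return_logs=False)
    print(output.keys())   # e.g. dict_keys(['inv_depths', 'poses'])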
-
property logs
Return logs.
-
property losses
Return losses.
-
property network_requirements
Networks required to run the model
- Returns
requirements –
- depth_net (bool)
Whether a depth network is required by the model
- pose_net (bool)
Whether a pose network is required by the model
- Return type
dict
-
property train_requirements
Information required by the model at the training stage
- Returns
requirements –
- gt_depth (bool)
Whether ground truth depth is required by the model at training time
- gt_pose (bool)
Whether ground truth pose is required by the model at training time
- Return type
dict
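Trainer code can inspect both requirement dictionaries before setting up networks and dataloaders. A minimal sketch of how these boolean flags might be consumed (the surrounding checks, and the model.depth_net attribute access, are assumptions):

    # Which networks does the model need?
    net_req = model.network_requirements      # e.g. {'depth_net': True, 'pose_net': True}
    # Which ground-truth signals does it need while training?
    train_req = model.train_requirements      # e.g. {'gt_depth': False, 'gt_pose': False}

    if net_req['depth_net'] and model.depth_net is None:
        raise ValueError('This model requires a depth network')
    if train_req['gt_depth']:
        print('Dataloaders must provide ground-truth depth maps')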