
first changes to make the docker work with #5

Merged 9 commits on Oct 30, 2024
Changes from 6 commits
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -1,4 +1,8 @@
# CHANGELOG
## 1.1.0
- modification de chemin pour pouvoir passer dans la gpao
- coupure des chemins de fichiers en chemins de répertoires/nom de fichiers pour pouvoir les utiliser sur docker + store
- patchwork vérifie maintenant s'il y a un ficheir csv en entrée. Si c'est le cas, le fichier donneur utilisé est celui qui correspond au fichier receveur dans le fichier csv. S'il n'y a pas de fichier donneur correspondant, patchwork termine sans rien faire
Collaborator:

typo :

Suggested change
- patchwork vérifie maintenant s'il y a un ficheir csv en entrée. Si c'est le cas, le fichier donneur utilisé est celui qui correspond au fichier receveur dans le fichier csv. S'il n'y a pas de fichier donneur correspondant, patchwork termine sans rien faire
- patchwork vérifie maintenant s'il y a un fichier csv en entrée. Si c'est le cas, le fichier donneur utilisé est celui qui correspond au fichier receveur dans le fichier csv. S'il n'y a pas de fichier donneur correspondant, patchwork termine sans rien faire


## 1.0.0
version initiale :
29 changes: 23 additions & 6 deletions README.md
@@ -26,18 +26,35 @@ conda activate patchwork
```
## utilisation

Le script peut être lancé via :
Le script d'ajout de points peut être lancé via :
```
python main.py filepath.DONOR_FILE=[chemin fichier donneur] filepath.RECIPIENT_FILE=[chemin fichier receveur] filepath.OUTPUT_FILE=[chemin fichier de sortie] [autres options]
```
Les différentes options, modifiables soit dans le fichier configs/configs_patchwork.yaml, soit en ligne de commande comme indiqué juste au-dessus :

filepath.DONOR_FILE : Le chemin du fichier qui peut donner des points à ajouter
filepath.RECIPIENT_FILE : Le chemin du fichier qui va obtenir des points en plus
filepath.OUTPUT_FILE : Le chemin du fichier en sortie
filepath.OUTPUT_INDICES_MAP : Le chemin de sortie du fichier d'indice
filepath.INPUT_INDICES_MAP : Le chemin vers le fichier d'indice en entrée, si on en a un. Autrement, à laisser à "null"
filepath.DONOR_DIRECTORY : Le répertoire du fichier qui peut donner des points à ajouter
filepath.DONOR_NAME : Le nom du fichier qui peut donner des points à ajouter
filepath.RECIPIENT_DIRECTORY : Le répertoire du fichier qui va obtenir des points en plus
filepath.RECIPIENT_NAME : Le nom du fichier qui va obtenir des points en plus
filepath.OUTPUT_DIR : Le répertoire du fichier en sortie
filepath.OUTPUT_NAME : Le nom du fichier en sortie
filepath.OUTPUT_INDICES_MAP_DIR : Le répertoire de sortie du fichier d'indice
filepath.OUTPUT_INDICES_MAP_NAME : Le nom de sortie du fichier d'indice

DONOR_CLASS_LIST : Défaut [2, 9]. La liste des classes des points du fichier donneur qui peuvent être ajoutés.
RECIPIENT_CLASS_LIST : Défaut [2, 3, 9, 17]. La liste des classes des points du fichier receveur qui, s'ils sont absents dans une cellule, justifieront de prendre les points du fichier donneur de la même cellule
TILE_SIZE : Défaut 1000. Taille du côté de l'emprise carrée représentée par les fichiers lidar d'entrée
PATCH_SIZE : Défaut 1. Taille en mètre du côté d'une cellule (doit être un diviseur de TILE_SIZE, soit pour 1000 : 0.25, 0.5, 2, 4, 5, 10, 25...)
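The divisor constraint on PATCH_SIZE can be verified up front; a minimal sketch (the `is_divisor` helper is illustrative, not part of the repo):

```python
def is_divisor(tile_size: float, patch_size: float) -> bool:
    """True when patch_size divides tile_size exactly, as the patch grid
    requires (e.g. TILE_SIZE=1000 with PATCH_SIZE=0.5)."""
    ratio = tile_size / patch_size
    return ratio == int(ratio)
```

For TILE_SIZE=1000 this accepts 0.25, 0.5, 2, 4, 5, 10, 25... and rejects values such as 3.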

Le script de sélection/découpe de fichier lidar peut être lancé via :
```
python lidar_filepath.py filepath.DONOR_DIRECTORY=[répertoire_fichiers_donneurs] filepath.RECIPIENT_DIRECTORY=[répertoire_fichiers_receveurs] filepath.SHP_NAME=[nom_shapefile] filepath.SHP_DIRECTORY=[répertoire_shapefile] filepath.CSV_NAME=[nom_fichier_csv] filepath.CSV_DIRECTORY=[répertoire_fichier_csv] filepath.OUTPUT_DIRECTORY=[chemin_de_sortie]
```

filepath.DONOR_DIRECTORY: Le répertoire contenant les fichiers lidar donneurs
filepath.RECIPIENT_DIRECTORY: Le répertoire contenant les fichiers lidar receveurs
filepath.SHP_NAME: Le nom du shapefile contenant l'emprise du chantier qui délimite les fichiers lidar qui nous intéressent
filepath.SHP_DIRECTORY: Le répertoire du fichier shapefile
filepath.CSV_NAME: Le nom du fichier csv qui lie les différents fichiers donneurs et receveurs
filepath.CSV_DIRECTORY: Le répertoire du fichier csv
filepath.OUTPUT_DIRECTORY: le répertoire recevant les fichiers lidar découpés
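The *_DIRECTORY/*_NAME pairs above replace the former single-path options so that directories can be remapped (Docker volume, store) independently of file names; recomposing a full path is a plain join. A minimal sketch (example values are illustrative):

```python
import os

# Each path option is split into a directory and a file name;
# the consumer rebuilds the full path with os.path.join.
config = {
    "RECIPIENT_DIRECTORY": "/data/recipient",
    "RECIPIENT_NAME": "tile_0001.las",
}

recipient_path = os.path.join(config["RECIPIENT_DIRECTORY"], config["RECIPIENT_NAME"])
```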
30 changes: 22 additions & 8 deletions configs/configs_patchwork.yaml
@@ -19,16 +19,30 @@ defaults:
- _self_

filepath:
SHAPEFILE_PATH: null # shapefile for lidar selecter, to determine the lidar file to select
SHP_NAME: null # name of the shapefile for lidar selecter, to determine the lidar file to select
SHP_DIRECTORY: null # path to the directory containing the shapefile
DONOR_DIRECTORY: null # directory containing all potential donor lidar files, for lidar selecter
RECIPIENT_DIRECTORY: null # directory containing all potential recipient lidar files, for lidar selecter
OUTPUT_DIRECTORY_PATH: null # directory containing all potential donor lidar files, for lidar selecter
RECIPIENT_FILE: null # path to the file that receives points. If done after lidar selecter, is in a subdirectory of OUTPUT_DIRECTORY_PATH
DONOR_FILE: null # path to the file that gives points. If done after lidar selecter, is in a subdirectory of OUTPUT_DIRECTORY_PATH
OUTPUT_FILE: null # path to the (resulting) file with added points.
INPUT_INDICES_MAP: null # path for the indices map reflecting the changes to the recipient
OUTPUT_INDICES_MAP: null
CSV_PATH: null # path to the csv file that log the lidar files to process with patchwork
OUTPUT_DIRECTORY: null # output directory for the selected/cut lidar files, for lidar selecter

# OUTPUT_FILE: null # path to the (resulting) file with added points.
leavauchier marked this conversation as resolved.
OUTPUT_DIR: null # directory of the file with added points, from patchwork.
OUTPUT_NAME: null # name of the file with added points, from patchwork.

INPUT_INDICES_MAP_DIR: null
INPUT_INDICES_MAP_NAME: null

OUTPUT_INDICES_MAP_DIR: null # path to the directory for the indices map reflecting the changes to the recipient, from patchwork
OUTPUT_INDICES_MAP_NAME: null # name of the indices map reflecting the changes to the recipient, from patchwork

# INPUT_DIRECTORY: null # directory for input (shapefile)
Collaborator:
commented-out code to remove

Collaborator:
up

Collaborator:
up

CSV_NAME: null # name of the csv file that logs the lidar files to process with patchwork
CSV_DIRECTORY: null # path to the directory that will contain the csv

DONOR_NAME: null # name of the donor file for patchwork
RECIPIENT_NAME: null # name of the recipient file for patchwork



CRS: 2154

15 changes: 11 additions & 4 deletions exemples/lidar_selecter_example.sh
@@ -1,11 +1,18 @@
# for selecting, cutting and dispatching lidar files for patchwork
python lidar_filepath.py \
filepath.SHAPEFILE_PATH=[path_to_shapfile] \
filepath.DONOR_DIRECTORY=[path_to_directory_with_donor_files] \
filepath.RECIPIENT_DIRECTORY=[path_to_directory_with_recipient_files] \
filepath.OUTPUT_DIRECTORY_PATH=[output_directory_path]
filepath.SHP_NAME=[shapefile_name] \
filepath.SHP_DIRECTORY=[path_to_shapefile_file] \
filepath.CSV_NAME=[csv_file_name] \
filepath.CSV_DIRECTORY=[path_to_csv_file] \
filepath.OUTPUT_DIRECTORY=[output_directory_path]

# filepath.SHAPEFILE_PATH: the shapefile that contains the geometry we want to work on
# filepath.DONOR_DIRECTORY: The directory containing all the lidar files that could provide points
# filepath.RECIPIENT_DIRECTORY: The directory containing all the lidar files that could receive points
# filepath.OUTPUT_DIRECTORY_PATH: the directory to put all the selected/cut lidar files
# filepath.SHP_NAME: the name of the shapefile defining the area used to select the lidar files
# filepath.SHP_DIRECTORY: the directory of the shapefile
# filepath.CSV_NAME: the name of the csv file in which we link donor and recipient files
# filepath.CSV_DIRECTORY: the directory of the csv file
# filepath.OUTPUT_DIRECTORY: the directory to put all the cut lidar files

24 changes: 16 additions & 8 deletions exemples/patchwork_example.sh
@@ -1,12 +1,20 @@
# for selecting, cutting and dispatching lidar files for patchwork
python main.py \
filepath.DONOR_FILE=[donor_file_path]
filepath.RECIPIENT_FILE=[recipient_file_path]
filepath.OUTPUT_FILE=[output_file_path]
filepath.OUTPUT_INDICES_MAP=[output_indices_map_path]
filepath.DONOR_DIRECTORY=[donor_file_dir] \
filepath.DONOR_NAME=[donor_file_name] \
filepath.RECIPIENT_DIRECTORY=[recipient_file_dir] \
filepath.RECIPIENT_NAME=[recipient_file_name] \
filepath.OUTPUT_DIR=[output_file_dir] \
filepath.OUTPUT_NAME=[output_file_name] \
filepath.OUTPUT_INDICES_MAP_DIR=[output_indices_map_dir] \
filepath.OUTPUT_INDICES_MAP_NAME=[output_indices_map_name]

# filepath.DONOR_FILE: the path to the lidar file we will add points from
# filepath.RECIPIENT_FILE: the path to the lidar file we will add points to
# filepath.OUTPUT_FILE: the path to the resulting lidar file
# filepath.OUTPUT_INDICES_MAP: the path to the map with indices displaying where points have been added
# filepath.DONOR_DIRECTORY: the directory to the lidar file we will add points from
# filepath.DONOR_NAME: the name of the lidar file we will add points from
# filepath.RECIPIENT_DIRECTORY: the directory to the lidar file we will add points to
# filepath.RECIPIENT_NAME: the name of the lidar file we will add points to
# filepath.OUTPUT_DIR: the directory to the resulting lidar file
# filepath.OUTPUT_NAME: the name of the resulting lidar file
# filepath.OUTPUT_INDICES_MAP_DIR: the directory to the map with indices displaying where points have been added
# filepath.OUTPUT_INDICES_MAP_NAME: the name of the map with indices displaying where points have been added
7 changes: 5 additions & 2 deletions indices_map.py
@@ -1,3 +1,5 @@
import os

import numpy as np
from omegaconf import DictConfig
import rasterio as rs
@@ -38,9 +40,10 @@ def create_indices_map(config: DictConfig, df_points: DataFrame):
corner_x, corner_y = get_tile_origin_from_pointcloud(config, df_points)

grid = create_indices_grid(config, df_points)
output_indices_map_path = os.path.join(config.filepath.OUTPUT_INDICES_MAP_DIR, config.filepath.OUTPUT_INDICES_MAP_NAME)

transform = from_origin(corner_x, corner_y, config.PATCH_SIZE, config.PATCH_SIZE)
indices_map = rs.open(config.filepath.OUTPUT_INDICES_MAP, 'w', driver='GTiff',
indices_map = rs.open(output_indices_map_path, 'w', driver='GTiff',
height=grid.shape[0], width=grid.shape[1],
count=1, dtype=str(grid.dtype),
crs=config.CRS,
@@ -50,7 +53,7 @@ def create_indices_map(config: DictConfig, df_points: DataFrame):


def read_indices_map(config: DictConfig):
indices_map = rs.open(config.filepath.INPUT_INDICES_MAP)
indices_map = rs.open(os.path.join(config.filepath.INPUT_INDICES_MAP_DIR, config.filepath.INPUT_INDICES_MAP_NAME))
transformer = indices_map.get_transform()
grid = indices_map.read()
grid = grid[0]
9 changes: 4 additions & 5 deletions lidar_selecter.py
@@ -28,7 +28,7 @@ def patchwork_dispatcher(config: DictConfig):
# preparing donor files:
select_lidar(config,
config.filepath.DONOR_DIRECTORY,
config.filepath.OUTPUT_DIRECTORY_PATH,
config.filepath.OUTPUT_DIRECTORY,
c.DONOR_SUBDIRECTORY_NAME,
df_result,
c.DONOR_FILE_KEY,
@@ -37,14 +37,13 @@
# preparing recipient files:
select_lidar(config,
config.filepath.RECIPIENT_DIRECTORY,
config.filepath.OUTPUT_DIRECTORY_PATH,
config.filepath.OUTPUT_DIRECTORY,
c.RECIPIENT_SUBDIRECTORY_NAME,
df_result,
c.RECIPIENT_FILE_KEY,
False,
)

df_result.to_csv(config.filepath.CSV_PATH, index=False)
df_result.to_csv(os.path.join(config.filepath.CSV_DIRECTORY, config.filepath.CSV_NAME), index=False)


def cut_lidar(las_points: ScaleAwarePointRecord, shapefile_geometry: MultiPolygon) -> ScaleAwarePointRecord:
@@ -79,7 +78,7 @@ def select_lidar(config: DictConfig,
Finally, df_result is updated with the path for each file
"""

worksite = gpd.GeoDataFrame.from_file(config.filepath.SHAPEFILE_PATH)
worksite = gpd.GeoDataFrame.from_file(os.path.join(config.filepath.SHP_DIRECTORY, config.filepath.SHP_NAME))
shapefile_geometry = worksite.dissolve().geometry.item()

time_old = timeit.default_timer()
55 changes: 47 additions & 8 deletions patchwork.py
@@ -1,6 +1,8 @@

from shutil import copy2
from typing import List, Tuple
import os
from pathlib import Path

from omegaconf import DictConfig

@@ -9,6 +11,7 @@
import laspy
from laspy import ScaleAwarePointRecord, LasReader

import constants as c
from tools import get_tile_origin_from_pointcloud, crop_tile
from indices_map import create_indices_map
from constants import CLASSIFICATION_STR, PATCH_X_STR, PATCH_Y_STR
@@ -67,8 +70,12 @@ def get_type(new_column_size: int):


def get_complementary_points(config: DictConfig) -> pd.DataFrame:
with laspy.open(config.filepath.DONOR_FILE) as donor_file, \
laspy.open(config.filepath.RECIPIENT_FILE) as recipient_file:
donor_dir, donor_name = get_donor_path(config)
donor_file_path = os.path.join(donor_dir, donor_name)
recipient_file_path = os.path.join(config.filepath.RECIPIENT_DIRECTORY, config.filepath.RECIPIENT_NAME)

with laspy.open(donor_file_path) as donor_file, \
laspy.open(recipient_file_path) as recipient_file:
raw_donor_points = donor_file.read().points
donor_points = crop_tile(config, raw_donor_points)
raw_recipient_points = recipient_file.read().points
@@ -78,8 +85,8 @@ def get_complementary_points(config: DictConfig) -> pd.DataFrame:
tile_origin_donor = get_tile_origin_from_pointcloud(config, donor_points)
tile_origin_recipient = get_tile_origin_from_pointcloud(config, recipient_points)
if tile_origin_donor != tile_origin_recipient:
raise ValueError(f"{config.filepath.DONOR_FILE} and \
{config.filepath.RECIPIENT_FILE} are not on the same area")
raise ValueError(f"{donor_file_path} and \
{recipient_file_path} are not on the same area")

donor_columns = get_field_from_header(donor_file)
df_donor_points = get_selected_classes_points(config,
@@ -129,8 +136,8 @@ def test_field_exists(file_path: str, colmun: str) -> bool:

def append_points(config: DictConfig, extra_points: pd.DataFrame):
# get field to copy :
recipient_filepath = config.filepath.RECIPIENT_FILE
ouput_filepath = config.filepath.OUTPUT_FILE
recipient_filepath = os.path.join(config.filepath.RECIPIENT_DIRECTORY, config.filepath.RECIPIENT_NAME)
ouput_filepath = os.path.join(config.filepath.OUTPUT_DIR, config.filepath.OUTPUT_NAME)
with laspy.open(recipient_filepath) as recipient_file:
recipient_fields_list = get_field_from_header(recipient_file)

@@ -154,9 +161,9 @@

# if we want a new column, we start by adding its name
if config.NEW_COLUMN:
if test_field_exists(config.filepath.RECIPIENT_FILE, config.NEW_COLUMN):
if test_field_exists(recipient_filepath, config.NEW_COLUMN):
raise ValueError(f"{config.NEW_COLUMN} already exists as \
column name in {config.filepath.RECIPIENT_FILE}")
column name in {recipient_filepath}")
new_column_type = get_type(config.NEW_COLUMN_SIZE)
output_las = laspy.read(ouput_filepath)
output_las.add_extra_dim(laspy.ExtraBytesParams(name=config.NEW_COLUMN, type=new_column_type))
@@ -183,7 +190,39 @@ def append_points(config: DictConfig, extra_points: pd.DataFrame):
output_las.append_points(new_points)


def get_donor_from_csv(recipient_file_path:str, csv_file_path:str)-> str:
"""
check if there is a donor file, in the csv file, matching the recipient file
return the path to that file if it exists
return "" otherwise
"""
df_csv_data = pd.read_csv(csv_file_path)
donor_file_paths = df_csv_data.loc[df_csv_data[c.RECIPIENT_FILE_KEY] == recipient_file_path, c.DONOR_FILE_KEY]
if len(donor_file_paths) > 0:
return donor_file_paths.loc[0] # there should be only one donor file for a given recipient file
return ""


def get_donor_path(config: DictConfig) -> Tuple[str, str]:
"""Return a donor directory and a name:
If there is no csv file provided in config, return DONOR_DIRECTORY and DONOR_NAME
if there is a csv file provided, return DONOR_DIRECTORY and DONOR_NAME matching the given RECIPIENT
if there is a csv file provided but no matching DONOR, return "" twice """
if config.filepath.CSV_DIRECTORY and config.filepath.CSV_NAME :
csv_file_path = os.path.join(config.filepath.CSV_DIRECTORY, config.filepath.CSV_NAME)
recipient_file_path = os.path.join(config.filepath.RECIPIENT_DIRECTORY, config.filepath.RECIPIENT_NAME)
donor_file_path = get_donor_from_csv(recipient_file_path, csv_file_path)
if not donor_file_path: # if there is no matching donor file, we do nothing
return "", ""
return str(Path(donor_file_path).parent), str(Path(donor_file_path).name)
return config.filepath.DONOR_DIRECTORY, config.filepath.DONOR_NAME


def patchwork(config: DictConfig):
_, donor_name = get_donor_path(config)
if not donor_name:
return

complementary_bd_points = get_complementary_points(config)
append_points(config, complementary_bd_points)
create_indices_map(config, complementary_bd_points)
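The donor lookup that `patchwork` relies on can be exercised in isolation. Below is a hedged re-sketch of `get_donor_from_csv`: the column names are assumptions standing in for the repo's `constants` module, and it uses positional access (`.iloc[0]`) so the first match is returned whatever its row label:

```python
import pandas as pd

# Assumed column names; the repo reads them from its constants module.
RECIPIENT_FILE_KEY = "recipient_file"
DONOR_FILE_KEY = "donor_file"

def get_donor_from_csv(recipient_file_path: str, csv_source) -> str:
    """Return the donor path matched to the recipient in the csv, or ""."""
    df_csv_data = pd.read_csv(csv_source)
    matches = df_csv_data.loc[
        df_csv_data[RECIPIENT_FILE_KEY] == recipient_file_path, DONOR_FILE_KEY
    ]
    # .iloc[0] picks the first match by position, independent of the index label
    return matches.iloc[0] if len(matches) > 0 else ""
```

When the recipient is absent from the csv, the empty string makes `patchwork` return without doing anything, as described in the CHANGELOG entry.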
19 changes: 13 additions & 6 deletions test/test_indices_map.py
@@ -1,4 +1,5 @@
import sys
import os

from hydra import compose, initialize
import numpy as np
@@ -41,20 +42,23 @@ def test_create_indices_points():


def test_create_indices_map(tmp_path_factory):
tmp_file_path = tmp_path_factory.mktemp("data") / "indices.tif"
tmp_file_dir = tmp_path_factory.mktemp("data")
tmp_file_name = "indices.tif"

with initialize(version_base="1.2", config_path="../configs"):
config = compose(
config_name="configs_patchwork.yaml",
overrides=[
f"PATCH_SIZE={PATCH_SIZE}",
f"TILE_SIZE={TILE_SIZE}",
f"filepath.OUTPUT_INDICES_MAP={tmp_file_path}",
f"filepath.OUTPUT_INDICES_MAP_DIR={tmp_file_dir}",
f"filepath.OUTPUT_INDICES_MAP_NAME={tmp_file_name}",
]
)

df_points = pd.DataFrame(data=DATA_POINTS)
create_indices_map(config, df_points)
raster = rs.open(tmp_file_path)
raster = rs.open(os.path.join(tmp_file_dir, tmp_file_name))
grid = raster.read()

grid = grid.transpose() # indices aren't read the way we want otherwise
@@ -66,15 +70,17 @@


def test_read_indices_map(tmp_path_factory):
tmp_file_path = tmp_path_factory.mktemp("data") / "indices.tif"
tmp_file_dir = tmp_path_factory.mktemp("data")
tmp_file_name = "indices.tif"

with initialize(version_base="1.2", config_path="../configs"):
config = compose(
config_name="configs_patchwork.yaml",
overrides=[
f"PATCH_SIZE={PATCH_SIZE}",
f"TILE_SIZE={TILE_SIZE}",
f"filepath.INPUT_INDICES_MAP={tmp_file_path}",
f"filepath.INPUT_INDICES_MAP_DIR={tmp_file_dir}",
f"filepath.INPUT_INDICES_MAP_NAME={tmp_file_name}",
]
)

@@ -84,7 +90,8 @@
[1, 1, 1],])

transform = from_origin(0, 3, config.PATCH_SIZE, config.PATCH_SIZE)
indices_map = rs.open(config.filepath.INPUT_INDICES_MAP,
output_indices_map_path = os.path.join(config.filepath.INPUT_INDICES_MAP_DIR, config.filepath.INPUT_INDICES_MAP_NAME)
indices_map = rs.open(output_indices_map_path,
'w',
driver='GTiff',
height=grid.shape[0],