The accuracy and homogeneity of climate data are indispensable for many aspects of climate research. In particular, a realistic and reliable assessment of historical climate trends and variability is hardly possible without long-term, homogeneous time series of climate data. Accurate and homogeneous climate data are likewise essential for computing the statistics used to characterize the state of the climate and climate extremes. Unfortunately, many kinds of changes (such as changes of instrument or observer, and changes in station location or exposure, observing practices and procedures, etc.) during the period of record can introduce sudden non-climatic changes (artificial shifts) into a data time series. Such artificial shifts can strongly affect the results of climate analyses, especially trend analyses. Artificial shifts should therefore be removed, to the extent possible, from a time series before it is used, particularly for climate trend assessment.

This session will discuss and inter-compare detection and correction techniques, algorithms, and software, as well as their applicability to data series of different temporal resolutions (annual, monthly, daily…) and of different climate elements (temperature, precipitation, pressure, wind, etc.) from observing networks of different densities and characteristics. The session will focus on the latest developments in climate data homogenization, on practical procedures for applying and inter-comparing the different approaches, and on the use of homogenized data in assessing observed climate trends and variability and in analysing climate extremes.
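To make the detection-and-correction idea concrete, the following is a minimal illustrative sketch, not any particular method discussed in the session: it applies a standard-normal-homogeneity-style statistic (after Alexandersson's SNHT) to locate a single candidate break in a series, then removes the shift with a simple mean adjustment. The function names (`snht`, `adjust`) are hypothetical, and real homogenization software handles multiple breaks, significance testing, and reference series, which this sketch omits.

```python
from statistics import mean, stdev

def snht(series):
    """SNHT-style statistic over all candidate break positions.

    Returns (k, T_max): a break is suspected between index k-1 and k
    where the statistic T(k) = k*z1^2 + (n-k)*z2^2 peaks, with z1, z2
    the means of the standardized series before and after the break.
    """
    n = len(series)
    mu, sigma = mean(series), stdev(series)
    z = [(x - mu) / sigma for x in series]
    best_k, best_t = None, float("-inf")
    for k in range(1, n):          # candidate break after position k-1
        z1 = sum(z[:k]) / k        # mean of standardized values before break
        z2 = sum(z[k:]) / (n - k)  # mean of standardized values after break
        t = k * z1 ** 2 + (n - k) * z2 ** 2
        if t > best_t:
            best_k, best_t = k, t
    return best_k, best_t

def adjust(series, k):
    """Remove the artificial shift by aligning the mean of the later
    segment with that of the earlier one (simple mean adjustment)."""
    delta = mean(series[k:]) - mean(series[:k])
    return series[:k] + [x - delta for x in series[k:]]
```

For example, a series with an abrupt 2-degree jump after the tenth value would yield `k == 10` from `snht`, and `adjust` would bring the later segment back in line with the earlier one. In practice, the detected break would also be checked against station metadata and a significance threshold before any correction is applied.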