Aaron Bansemer has generated synthetic data to simulate the response of the 2D-S, CIP and PIP and these data are now available at:
https://drive.google.com/drive/folders/1ZiinnejLj_U04QBQTtJfzA90Or_tqDmU?usp=sharing
The simulations all use the same assumed exponential size distribution, and there are simulations for both drops and columns. An extra set of synthetic data for the HVPS will be available in the near future (they are taking longer to run because of the large sample volume), and Aaron will add them to the folder when they are complete. Another set of simulations assuming irregular ice particles will also be generated so that there is a range of particle types for each instrument. These will be added once the HVPS simulations are complete.
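To make the assumed input concrete, the exponential size distribution has the form n(D) = N0 exp(-λD). The following is a minimal sketch; the N0 and λ values here are placeholders, not the parameters actually used in Aaron's simulations:

```python
import math

# Hypothetical parameters -- the actual N0 and lambda used to generate
# the synthetic data are not specified here.
N0 = 1.0e4    # intercept parameter [m^-3 um^-1] (assumed)
lam = 0.01    # slope parameter [um^-1] (assumed)

def n(D_um):
    """Exponential size distribution n(D) = N0 * exp(-lambda * D)."""
    return N0 * math.exp(-lam * D_um)

# Integrating n(D) from 0 to infinity gives the total number
# concentration N0 / lambda.
total_N = N0 / lam
print(total_N)  # 1000000.0
```

Fitting your retrieved size distributions back to this form is one way to sanity-check a processing chain against the known truth.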
In order to make the comparisons most meaningful, it is important that we all follow the same instructions for processing the data. As some of us did a similar exercise 6 years ago for the 2018 University of British Columbia workshop on evaluation of cloud probe processing software, I recommend that we follow the same instructions for this intercomparison. We will compare both particle-by-particle (PbP) and bulk parameter data. Note that I am attaching a poster presented at the 2018 AMS Cloud Physics Conference that describes the results of our previous intercomparison.
First, please process the data and provide the PbP data for each particle including the following: time, particle count in buffer, width (W), length (L), projected area (A), particle maximum dimension (D), perimeter (P), and a flag for whether the end diodes are shadowed (i.e., 0, 1 or 2 depending on whether no, one, or both end photodiodes are shadowed). For the first step we will only use the extent of the particle along the photodiode array in the comparison (i.e., the width W). However, to make intercomparison easier at later steps, please include all the aforementioned parameters in the file. Note that there are different definitions of particle maximum dimension (i.e., time direction, diode direction, maximum of the two, minimum circle completely enclosing the particle, area-equivalent diameter, etc.; see Wu and McFarquhar 2016 for a list of possible metrics that can be used to characterize the dimension of the particle): feel free to use whatever dimension you typically use.
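As a sketch of the requested PbP record layout, the structure below lists the fields above in one place; the field names and the end-diode flag helper are illustrative assumptions, not a prescribed interface, so map them onto your own code's conventions:

```python
from dataclasses import dataclass

@dataclass
class PbPRecord:
    """One particle-by-particle record with the requested fields.
    Names are illustrative only."""
    time: float            # particle time [s]
    count_in_buffer: int   # particle count within the image buffer
    width: float           # W: extent along the photodiode array [um]
    length: float          # L: extent in the time direction [um]
    area: float            # A: projected area [um^2]
    max_dimension: float   # D: your preferred definition (see Wu and McFarquhar 2016)
    perimeter: float       # P [um]
    edge_flag: int         # 0, 1, or 2 end photodiodes shadowed

def end_diode_flag(image_slices, n_diodes):
    """Return 0, 1, or 2 depending on how many end diodes are shadowed.

    image_slices: list of sets of shadowed diode indices (0-based),
    one set per time slice of the particle image.
    """
    shadowed = set().union(*image_slices) if image_slices else set()
    return int(0 in shadowed) + int((n_diodes - 1) in shadowed)
```

For example, a particle shadowing diode 0 but not the last diode of a 128-diode array would get `end_diode_flag(..., 128) == 1`.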
In summary, the instructions for the files to provide for the intercomparison are as follows (repeat for each file):
- Assume all-in particles
- Assume particles not all-in
- Assume sum of the above two
- Assume we need to use the "water-processing" flag in SODA
- Assume all-in particles
- Assume particles not all-in
- Assume sum of the above two
It is possible we might not have time to complete all steps before the workshop. Please send your files after we complete each step so that we can intercompare the distributions on a step-by-step basis. We anticipate that these comparisons will take place continuously before we meet in Jeju to discuss the final comparisons. It will be very helpful to start receiving files soon.
It would be best if you could provide netCDF files with the output from your code. There are two sets of files for each probe (2DS, HVPS-3, CIP, PIP): one with simulated round drops and another with simulated columns. Please supply separate files for each probe. To allow us to more easily interpret who supplied which files, we recommend the following naming convention for your supplied files: XXX_PPP_Z_S#_YY.cdf, where XXX is a unique 3-character code (e.g., UIO for University of Illinois/Oklahoma software), PPP represents the probe (i.e., 2DS, HVP, CIP or PIP), Z represents shape (i.e., either C for columns or R for round particles), S# represents the step number (i.e., use S1 for step 1 giving counts along the photodiode array) and YY represents the type of particles included (i.e., CO for complete, PA for partial and CP for complete + partial).
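The naming convention above can be encoded in a small helper so that file names stay consistent across groups. This is just a convenience sketch; the function name and argument order are my own, and the allowed values come directly from the lists in this email:

```python
def intercomparison_filename(code, probe, shape, step, particles):
    """Build a file name following the recommended convention
    XXX_PPP_Z_S#_YY.cdf, validating each field against the email's lists."""
    assert len(code) == 3, "use a unique 3-character institution code, e.g. UIO"
    assert probe in {"2DS", "HVP", "CIP", "PIP"}
    assert shape in {"C", "R"}              # C = columns, R = round particles
    assert particles in {"CO", "PA", "CP"}  # complete, partial, complete + partial
    return f"{code}_{probe}_{shape}_S{step}_{particles}.cdf"

print(intercomparison_filename("UIO", "2DS", "R", 1, "CO"))
# UIO_2DS_R_S1_CO.cdf
```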
Ideally we would like the PbP files by the end of March, and the size distribution files by the end of April, so that there is ample time to perform the comparison before the Jeju workshop (and get supplementary information if needed). Once we start receiving files, we will start generating the intercomparison plots and let you know any pertinent information as we move to subsequent steps; there is always a possibility additional information might be needed.
Please place your processed files on: https://drive.google.com/drive/folders/102GSajs4nEwh48OJvUzcuS00dbClBEaj?usp=sharing . Please send an email to Mastooreh Ameri (maameri@ou.edu) when you have placed files there so that she can check the files.
UPDATE: I've also been asked to add an example file with the PSD products (Steps 2-8) to serve as a guide. Here is a link to the 2018 drive that we used for that comparison, which contains good examples of the data files: https://drive.google.com/drive/folders/1uhTJJRcqXLFpz0z6QKXgqlPJnf-MTT27