From 33e81f04d46f42838079ba624e6d1181a9f28175 Mon Sep 17 00:00:00 2001
From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com>
Date: Thu, 19 Sep 2024 14:27:38 -0600
Subject: [PATCH] Update develop-ref after MET#2975 (#2977)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* #2673 Moved variable declaration after include
* #2673 Moved namespace statement down below include
* Feature #2395 wdir (#2820)
* Per #2395, add new columns to the VL1L2, VAL1L2, and VCNT line types for wind direction statistics. Work still in progress.
* Per #2395, write the new VCNT columns to the output and document the additions to the VL1L2, VAL1L2, and VCNT columns.
* Per #2395, add the definition of new statistics to Appendix G.
* Per #2395, update file version history.
* Per #2395, tweak the warning message about zero wind vectors and update grid-stat and point-stat to log calls to the do_vl1l2() function.
* Per #2395, refine the weights for wind direction stats, ignoring the undefined directions.
* Update src/tools/core/stat_analysis/aggr_stat_line.cc
* Update src/tools/core/stat_analysis/parse_stat_line.cc
* Update src/tools/core/stat_analysis/aggr_stat_line.cc
* Recent changes to branch protection rules for the develop branch have broken the logic of the update_truth.yml GHA workflow. Instead of submitting a PR to merge develop into develop-ref directly, use an intermediate update_truth_for_develop branch.
* Feature #2280 ens_prob (#2823)
* Per #2280, update to support probability threshold strings like ==8, where 8 is the number of ensemble members, to create probability bins centered on n/8 for n = 0 ... 8.
* Per #2280, update docs about probability threshold settings.
* Per #2280, use a loose tolerance when checking for consistent bin widths.
* Per #2280, add a new unit test for grid_stat to demonstrate processing the output from gen_ens_prod.
* Per #2280, when verifying NMEP probability forecasts, smooth the obs data first.
* Per #2280, only request STAT output for the PCT line type to match unit_grid_stat.xml and minimize the new output files.
* Per #2280, update config option docs.
* Per #2280, update config option docs.
* #2673 Change 0 to nullptr
* #2673 Change 0 to nullptr
* #2673 Change 0 to nullptr
* #2673 Change 0 to nullptr
* #2673 Change 0 to nullptr
* #2673 Removed the redundant parentheses with return
* #2673 Removed the redundant parentheses with return
* #2673 Removed the redundant parentheses with return
* #2673 Removed the redundant parentheses with return
* #2673 Removed the redundant parentheses with return
* #2673 Restored return statement
* #2673 Added std namespace
* #2673 Moved down 'using namespace' statement. Removed trailing spaces
* #2673 Moved down 'using namespace' statement.
* #2673 Moved down 'using namespace' statement.
* #2673 Moved down 'using namespace' statement.
* #2673 Moved down 'using namespace' statement.
* #2673 Added std namespace
* #2673 Added std namespace
* #2673 Added std namespace
* #2673 Changed literal 1 to boolean value, true
* Feature #2673 enum_to_string (#2835)
* Feature #2583 ecnt (#2825)
* Unrelated to #2583, fix typo in code comments.
* Per #2583, add hooks to write 3 new ECNT columns for observation error data.
* Per #2583, make error messages about mismatched array lengths more informative.
* Per #2583, switch to more concise variable naming conventions of ign_oerr_cnv, ign_oerr_cor, and dawid_seb.
* Per #2583, fix typo to enable compilation.
* Per #2583, define the 5 new ECNT column names.
* Per #2583, add 5 new columns to the ECNT table in the Ensemble-Stat chapter.
* Per #2583, update stat_columns.cc to write these 5 new ECNT columns.
* Per #2583, update the ECNTInfo class to compute the 5 new ECNT statistics.
* Per #2583, update stat-analysis to parse the 5 new ECNT columns.
* Per #2583, update aggregate_stat logic for 5 new ECNT columns.
* Per #2583, update PairDataEnsemble logic for 5 new ECNT columns.
* Per #2583, update the vx_statistics library with obs_error handling logic for the 5 new ECNT columns.
* Per #2583, changes to make it compile.
* Per #2583, changes to make it compile.
* Per #2583, switch to a consistent ECNT column naming convention with OERR at the end, using IGN_CONV_OERR and IGN_CORR_OERR.
* Per #2583, define ObsErrorEntry::variance() with a call to the dist_var() utility function.
* Per #2583, update PairDataEnsemble::compute_pair_vals() to compute the 5 new stats with the correct inputs.
* Per #2583, add DEBUG(10) log messages about computing these new stats.
* Per #2583, update Stat-Analysis to compute these 5 new stats from the ORANK line type.
* Per #2583, whitespace and comments.
* Per #2583, update the User's Guide.
* Per #2583, remove the DS_ADD_OERR and DS_MULT_OERR ECNT columns and rename DS_OERR as DSS, since observation error is not actually involved in its computation.
* Per #2583, minor update to Appendix C.
* Per #2583, rename ECNT line type statistic DSS to IDSS.
* Per #2583, fix a couple of typos.
* Per #2583, more error checking.
* Per #2583, remove the ECNT IDSS column since it's just 2*pi*IGN, the existing ignorance score, and only provides meaningful information when combined with the other Dawid-Sebastiani statistics that have already been removed.
* Per #2583, add Eric's documentation of these new stats to Appendix C. Along the way, update the DOI links in the references based on this APA style guide: https://apastyle.apa.org/style-grammar-guidelines/references/dois-urls#:~:text=Include%20a%20DOI%20for%20all,URL%2C%20include%20only%20the%20DOI.
* Per #2583, fix new equations with embedded underscores for PDF by defining both html and pdf formatting options.
* Per #2583, update the ign_conv_oerr equation to include a 2*pi multiplier for consistency with the existing ignorance score. Also, fix the documented equations.
* Per #2583, remove a log file that was inadvertently added on this branch.
* Per #2583, simplify the ObsErrorEntry::variance() implementation. For the distribution type of NONE, return a variance of 0.0 rather than bad data, as discussed with @michelleharrold and @JeffBeck-NOAA on 3/8/2024.
---------
Co-authored-by: MET Tools Test Account
* Revert #2825 since more documentation and testing is needed (#2837)
This reverts commit 108a8958b206d6712197823a083666ab039bf818.
* Feature #2583 ecnt fix IGN_OERR_CORR (#2838)
* Unrelated to #2583, fix typo in code comments.
* Per #2583, add hooks to write 3 new ECNT columns for observation error data.
* Per #2583, make error messages about mismatched array lengths more informative.
* Per #2583, switch to more concise variable naming conventions of ign_oerr_cnv, ign_oerr_cor, and dawid_seb.
* Per #2583, fix typo to enable compilation.
* Per #2583, define the 5 new ECNT column names.
* Per #2583, add 5 new columns to the ECNT table in the Ensemble-Stat chapter.
* Per #2583, update stat_columns.cc to write these 5 new ECNT columns.
* Per #2583, update the ECNTInfo class to compute the 5 new ECNT statistics.
* Per #2583, update stat-analysis to parse the 5 new ECNT columns.
* Per #2583, update aggregate_stat logic for 5 new ECNT columns.
* Per #2583, update PairDataEnsemble logic for 5 new ECNT columns.
* Per #2583, update the vx_statistics library with obs_error handling logic for the 5 new ECNT columns.
* Per #2583, changes to make it compile.
* Per #2583, changes to make it compile.
* Per #2583, switch to a consistent ECNT column naming convention with OERR at the end, using IGN_CONV_OERR and IGN_CORR_OERR.
* Per #2583, define ObsErrorEntry::variance() with a call to the dist_var() utility function.
* Per #2583, update PairDataEnsemble::compute_pair_vals() to compute the 5 new stats with the correct inputs.
* Per #2583, add DEBUG(10) log messages about computing these new stats.
* Per #2583, update Stat-Analysis to compute these 5 new stats from the ORANK line type.
* Per #2583, whitespace and comments.
* Per #2583, update the User's Guide.
* Per #2583, remove the DS_ADD_OERR and DS_MULT_OERR ECNT columns and rename DS_OERR as DSS, since observation error is not actually involved in its computation.
* Per #2583, minor update to Appendix C.
* Per #2583, rename ECNT line type statistic DSS to IDSS.
* Per #2583, fix a couple of typos.
* Per #2583, more error checking.
* Per #2583, remove the ECNT IDSS column since it's just 2*pi*IGN, the existing ignorance score, and only provides meaningful information when combined with the other Dawid-Sebastiani statistics that have already been removed.
* Per #2583, add Eric's documentation of these new stats to Appendix C. Along the way, update the DOI links in the references based on this APA style guide: https://apastyle.apa.org/style-grammar-guidelines/references/dois-urls#:~:text=Include%20a%20DOI%20for%20all,URL%2C%20include%20only%20the%20DOI.
* Per #2583, fix new equations with embedded underscores for PDF by defining both html and pdf formatting options.
* Per #2583, update the ign_conv_oerr equation to include a 2*pi multiplier for consistency with the existing ignorance score. Also, fix the documented equations.
* Per #2583, remove a log file that was inadvertently added on this branch.
* Per #2583, simplify the ObsErrorEntry::variance() implementation. For the distribution type of NONE, return a variance of 0.0 rather than bad data, as discussed with @michelleharrold and @JeffBeck-NOAA on 3/8/2024.
* Per #2583, updates to ensemble-stat.rst recommended by @michelleharrold and @JeffBeck-NOAA.
* Per #2583, implement corrections to IGN_CORR_OERR as directed by @ericgilleland.
---------
Co-authored-by: MET Tools Test Account
* Update the pull request template to include a question about expected impacts to existing METplus Use Cases.
* #2830 Changed enum Builtin to enum class
* #2830 Converted enum to enum class in config_constants.h
* Feature #2830 bootstrap enum (#2843)
* Bugfix #2833 develop azimuth (#2840)
* Per #2833, fix the n-1 bug when defining the azimuth delta for range/azimuth grids.
* Per #2833, when defining TcrmwData::range_max_km, divide by n_range - 1 since the range values start at 0.
* Per #2833, remove max_range_km from the TC-RMW config file. Set the default rmw_scale to NA so that it's not used by default, and update the documentation. Still need to make the logic of the code work as it should.
* Per #2833, update tc_rmw to define the range either as a function of rmw or using explicit spacing in km.
* Per #2833, update the TCRMW config files to remove the max_range_km entry, and update the unit test so that one call uses RMW ranges and the other uses ranges defined in kilometers.
* Per #2833, just correct code comments.
* Per #2833, divide by n - 1 when computing the range delta, rather than n.
* Per #2833, correct the handling of the maximum range in the tc-rmw tool. For fixed delta km, the max range needs to be defined when setting up the grid at the beginning.
---------
Co-authored-by: MET Tools Test Account
* #2830 Changed enum PadSize to enum class
* #2830 Removed redundant parentheses
* #2830 Removed commented out code
* #2830 Use auto
* #2830 Changed enum to enum class for DistType, InterpMthd, GridTemplates, and NormalizeType
* #2830 Moved enum_class_as_integer from header file to .cc files
* #2830 Added enum_as_int.hpp
* #2830 Added enum_as_int.hpp
* Renamed enum_class_as_integer to enum_class_as_int
* Removed redundant parentheses
* #2830 Changed enum to enum class
* #2830 Changed enum_class_as_integer to enum_class_as_int
* Feature #2379 sonarqube gha (#2847)
* Per #2379, testing initial GHA SonarQube setup.
* Per #2379, switch to only analyzing the src directory.
* Per #2379, move more config logic from sonar-project.properties into the workflow. #ci-skip-all
* Per #2379, try removing + symbols.
* Per #2379, move projectKey into the xml workflow and remove sonar-project.properties.
* Per #2379, try following the instructions at https://github.com/sonarsource-cfamily-examples/linux-autotools-gh-actions-sq/blob/main/.github/workflows/build.yml ci-skip-all
* Per #2379, see details of progress described in this issue comment: https://github.com/dtcenter/MET/issues/2379#issuecomment-2000242425
* Unrelated to #2379, just removing a spurious space that gets flagged as a diff when re-running enum_to_string on seneca.
* Per #2379, try running SonarQube through GitHub.
* Per #2379, remove the empty env section and also disable the testing workflow temporarily during sonarqube development.
* Per #2379, fix the docker image name.
* Per #2379, delete unneeded script.
* Per #2379, update the GHA to scan Python code and push to the correct SonarQube projects.
* Per #2379, update the GHA SonarQube project names.
* Per #2379, update the build job name.
* Per #2379, update the compile step name.
* Per #2379, switch to consistent SONAR variable names.
* Per #2379, fix typo in sed expressions.
* Per #2379, just rename the log artifact.
* Per #2379, use the time_command wrapper instead of run_command.
* Per #2379, fix bad env var name.
* Per #2379, switch from egrep to grep.
* Per #2379, just try cat-ting the logfile.
* Per #2379, test whether cat-ting the log file actually works.
* Per #2379, revert back.
* Per #2379, mention SonarQube in the PR template. Make the workflow name more succinct.
* Per #2379, add the SONAR_REFERENCE_BRANCH setting to define the sonar.newCode.referenceBranch property. The goal is to define the comparison reference branch for each SonarQube scan.
* Per #2379, have the sonarqube.yml job print the reference branch it's using.
* Per #2379, intentionally introduce a new code smell to see if SonarQube correctly flags it as appearing in new code.
* Per #2379, try adding the SonarQube quality gate check.
* Per #2379, add logic for using the report-task.txt output files to check the quality gate status for both the python and cxx scans.
* Per #2379, must use unique GHA ids.
* Per #2379, working on syntax for quality gate checks.
* Per #2379, try again.
* Per #2379, try again.
* Per #2379, try again.
* Per #2379, try again.
* Per #2379, try again.
* Per #2379, try again.
* Per #2379, try yet again.
* Per #2379
* Per #2379, add more debug.
* Per #2379, remove the -it option from docker run commands.
* Per #2379, again.
* Per #2379, now that the scan works as expected, remove the intentional SonarQube code smell as well as debug logging.
* Hotfix related to #2379. The sonar.newCode.referenceBranch and sonar.branch.name cannot be set to the same string! Only add the newCode definition when they differ.
* #2830 Changed enum STATJobType to enum class
* #2830 Changed STATLineType to enum class
* #2830 Changed Action to enum class
* #2830 Changed ModeDataType to enum class
* #2830 Changed StepCase to enum class
* #2830 Changed enum to enum class
* #2830 Changed GenesisPairCategory to enum class
* #2830 Removed redundant parentheses
* #2830 Consolidated duplicate if checks
* #2830 Cleanup
* #2830 Use empty() instead of length checking
* #2830 Adjusted indentation
* Feature #2379 develop sonarqube updates (#2850)
* Per #2379, move rgb2ctable.py into the python utility scripts directory for better organization and to enable convenient SonarQube scanning.
* Per #2379, remove point.py from the vx_python3_utils directory, which clearly was inadvertently added during development 4 years ago. As far as I can tell it isn't being called by any other code and doesn't belong in the repository. Note that scripts/python/met/point.py has the same name but is entirely different.
* Per #2379, update the GHA SonarQube scan to do a single scan with Python and C++ combined. The nightly build script is still doing 2 separate scans for now. If this all works well, they could also be combined into a single one.
* Per #2379, eliminate MET_CONFIG_OPTIONS from the SonarQube workflow since it doesn't need to be and probably shouldn't be configurable.
* Per #2379, trying to copy report-task.txt out of the image.
* Per #2379, update build_met_sonarqube.sh to check the scan return status.
* Per #2379, fix bash assignment syntax.
* Per #2379, remove the unused SCRIPT_DIR envvar.
* Per #2379, switch to a single SonarQube scan for MET's nightly build as well.
* Feature 2654 ascii2nc polar buoy support (#2846)
* Added the iabp data type, and modified file_handler to filter based on time range, which was added as a command line option.
* Handle time using input year, hour, min, and doy.
* Cleanup and switch to position day of year for time computations.
* Added an ascii2nc unit test for iabp data.
* Added utility scripts to pull iabp data from the web and find files in a time range.
* Modified iabp_handler to always output a placeholder 'location' observation with value 1.
* Added description of IABP data python utility scripts.
* Fixed syntax error.
* Fixed another syntax error.
* Slight reformat of documentation.
* Per #2654, update the Makefiles in scripts/python/utility to include all the python scripts that should be installed.
* Per #2654, remove unused code from get_iabp_from_web.py that is getting flagged as a bug by SonarQube.
* Per #2654, fix typo in docs.
---------
Co-authored-by: John Halley Gotway
Co-authored-by: MET Tools Test Account
* Feature #2786 rpss_from_prob (#2861)
* Per #2786, small change to an error message unrelated to this development.
* Per #2786, add the RPSInfo::set_climo_prob() function to derive the RPS line type from climatology probability bins, and update Ensemble-Stat to call it.
* Per #2786, minor change to clarify an error log message.
* Per #2786, for is_prob = TRUE input, the RPS line type is the only output option. Still need to update docs!
* Per #2786, add a new call to Ensemble-Stat to test computing RPS from climo probabilities.
* Per #2786, use the name rps_climo_bin_prob to be very explicit.
* Per #2786, redefine the logic of RPSInfo::set_climo_bin_prob() to match the CPC definition. Note that reliability, resolution, uncertainty, and RPSS based on the sample climatology are all set to bad data. Need to investigate whether they can be computed using these inputs.
* Per #2786, remove the requirement that any fcst.prob_cat_thresh thresholds must be defined. If they are defined, pass them through to the FCST_THRESH output column. If not, write NA. Add a check to make sure the event occurs in exactly 1 category.
* Per #2786, don't enforce fcst.prob_cat_thresh == obs.prob_cat_thresh for probabilistic inputs, and add more is_prob checks so that only the RPS line type can be written when given probabilistic inputs.
* Updated documentation.
* Per #2786, call the rescale_probability() function to convert from 0-100 probs to 0-1 probs.
---------
Co-authored-by: j-opatz
* Feature #2862 v12.0.0-beta4 (#2864)
* Feature #2379 develop single_sq_project (#2865)
* Hotfix to the documentation in the develop branch. Issue #2858 was closed as a duplicate of #2857. I had included it in the MET-12.0.0-beta4 release notes, but the work is not yet actually complete.
* Feature 2842 ugrid config (#2852)
* #2842 Removed UGrid related setting
* #2842 Corrected vertical level for data_plane_array
* #2842 Do not allow the time range
* #2842 The UGridConfig file can be passed as ugrid_dataset
* #2842 Changed -config option to -ugrid_config
* #2842 Deleted UGrid configurations
* #2842 Fix a compile error when UGrid is disabled
* #2842 Cleanup
* #2842 Added a unit test point_stat_ugrid_mpas_config
* #2842 Added a PointStatConfig without UGrid dataset.
* #2842 Corrected typo in the variable name
* Switched from time_centered to time_instant. I think time_centered is the center of the forecast lead window and time_instant is the time the forecast is valid (end of forecast window).
* #2842 Removed ugrid_max_distance_km and unused metadata names
* #2842 Restored time variable time_instant for LFRic
* #2842 Adjust lon between -180 and 180
* #2842 Adjust lon between -180 and 180
* #2842 Adjust lon between -180 and 180
* #2842 Adjusted lon to between -180 and 180
* #2842 Changed variable names
* Per #2842, switch from degrees east to west right when the longitudes are read.
* #2842, switch from degrees east to west right when the longitudes are read
* #2842 Cleanup debug messages
---------
Co-authored-by: Howard Soh
Co-authored-by: Daniel Adriaansen
Co-authored-by: John Halley Gotway
* Feature 2753 comp script config (#2868)
* Set the dynamic library file extension to .dylib if running on MacOS and .so otherwise.
* Added disabling of jasper documentation for compilation on Hera.
* Updated.
* Remove extra export of compiler env vars.
* Include the full path to the log file so it is easier to find the log file to examine when a command fails.
* Send cmake output to a log file.
* Remove redundant semi-colon.
* Use the full path to the log file so it is easier to examine on failure.
* Use run_cmd to catch if the rm command fails.
* Modifications for compilation on hera, gaea, and orion.
* Updating.
* Fixed variable name.
* Clean up if/else statements.
* Set the TIFF_LIBRARY_RELEASE argument to use the full path to the dynamic library file to prevent failure installing the proj library.
* Set LDFLAGS so that the LDFLAGS value set in the user's environment will also be used.
* Updated based on gaea, orion, and hera installs.
* Updated.
* Change the extension of dynamic library files only if the architecture is arm64, because older Macs still use .so.
* Added the netcdf library to the args to prevent an error installing NetCDF-CXX when PROJ has been installed in the same run of the script -- PATH is set in the COMPILE_PROJ if block, which keeps this flag from being added automatically.
* Clean up how rpath and -L are added to LDFLAGS so that each entry is separate -- prevents errors installing on Mac arm64 because multiple rpath values aren't read using ':'. Also use MET_PROJLIB.
* Updated.
* Removed -ltiff from MET libs.
* Only add a path to the rpath and -L arguments if it is not already included in LDFLAGS.
* Changed from using LIB_TIFF (full path to the tiff lib file) to TIFF_LIB_DIR (dir containing the tiff lib file). Added TIFF_INCLUDE_DIR to the proj compilation and -DJAS_ENABLE_DOC to the jasper compilation, taken from @jprestop's branch.
* Update comments.
* Ensure all MET_* and MET_*LIB variables are added to the rpath for consistency.
* Remove unnecessary if block and only export LDFLAGS at the end of setting locally.
* Updated.
* Added a section for adding /lib64 and rearranged placement of ADDTL_DIR.
* Commenting out the running of the Jasper lib tests.
* Updating and/or removing files.
* Updating and/or removing files.
* Latest updates, which include the addition of the tiff library for proj.
* Remove commented out line.
Co-authored-by: John Halley Gotway
* Make indentation consistent.
Co-authored-by: John Halley Gotway
* Make indentation consistent.
Co-authored-by: John Halley Gotway
* Make indentation consistent.
Co-authored-by: John Halley Gotway
* Per #2753, added -lm to configure_lib_args for NetCDF-CXX.
* Per #2753, updating acorn files.
* Per #2753, update wcoss2 files.
* Per #2753, updating the acorn file to include MET_PYTHON_EXE.
* Per #2753, updated files for 12.0.0 for derecho.
* Per #2753, updated the derecho file adding MET_PYTHON_EXE and made corrections.
* Updating config files.
* Updating orion files.
* Updates for gaea's files.
* Updating gaea modulefile.
* Removing modulefile for cheyenne.
* Added MET_PYTHON_EXE.
* Added MET_PYTHON_EXE to hera too.
* Adding file for hercules.
* Removing equals sign from setenv.
* Adding file for hercules.
* Updated script to add libjpeg installation for grib2c.
* Per #2753, adding file for casper.
---------
Co-authored-by: George McCabe <23407799+georgemccabe@users.noreply.github.com>
Co-authored-by: John Halley Gotway
* Feature #2795 level_mismatch_warning (#2873)
* Per #2795, move the warning message about level mismatch from the config validation step to when the forecast files are being processed. Only check this when the number of forecast fields is greater than 1, but no longer limit the check to pressure levels only.
* Per #2795, add comments.
* Whitespace.
* Per #2795, port the level mismatch fix over to Ensemble-Stat. Check it for each verification task, but only print it once for each task, rather than once for each task * ensemble member.
* Feature #2870 removing_MISSING_warning (#2872)
* Per #2870, define utility functions for parsing the file type from a file list and for logging missing files, checking for the MISSING keyword. Also, update Ensemble-Stat and Gen-Ens-Prod to call these functions.
* Per #2870, update the gen_ens_prod tests to demonstrate the use of the MISSING keyword for missing files. METplus uses this keyword for Ensemble-Stat and Gen-Ens-Prod.
* Feature 2842 ugrid config (#2875)
* #2842 Removed UGrid related setting
* #2842 Corrected vertical level for data_plane_array
* #2842 Do not allow the time range
* #2842 The UGridConfig file can be passed as ugrid_dataset
* #2842 Changed -config option to -ugrid_config
* #2842 Deleted UGrid configurations
* #2842 Fix a compile error when UGrid is disabled
* #2842 Cleanup
* #2842 Added a unit test point_stat_ugrid_mpas_config
* #2842 Added a PointStatConfig without UGrid dataset.
* #2842 Corrected typo in the variable name
* Switched from time_centered to time_instant. I think time_centered is the center of the forecast lead window and time_instant is the time the forecast is valid (end of forecast window).
* #2842 Removed ugrid_max_distance_km and unused metadata names
* #2842 Restored time variable time_instant for LFRic
* #2842 Adjust lon between -180 and 180
* #2842 Adjust lon between -180 and 180
* #2842 Adjust lon between -180 and 180
* #2842 Adjusted lon to between -180 and 180
* #2842 Changed variable names
* Per #2842, switch from degrees east to west right when the longitudes are read.
* #2842, switch from degrees east to west right when the longitudes are read
* #2842 Cleanup debug messages
* #2842 Disabled output types except STAT for sl1l2
* #2842 Disabled output types except STAT for sl1l2 and MPR
* #2842 Reduced output files for UGrid
---------
Co-authored-by: Howard Soh
Co-authored-by: Daniel Adriaansen
Co-authored-by: John Halley Gotway
* Hotfix to the develop branch to remove the duplicate test named 'point_stat_ugrid_mpas_config'. That was causing unit_ugrid.xml to fail because it was still looking for .txt output files that are no longer being generated.
* Feature 2748 document ugrid (#2869)
* Initial documentation of the UGRID capability.
* Fixes error in references, adds appendix to index, and adds a sub-section for configuration entries and a table for metadata map items.
* Corrects LFRic, rewords the section on UGRID conventions, updates the description of using GridStat, and removes mention of nodes.
* Forgot one more mention of UGRID conventions.
* Incorporates more suggestions from @willmayfield.
* Switches to numerical table reference.
* Feature #2781 Convert MET NetCDF point obs to Pandas DataFrame (#2877)
* Per #2781, added a function to convert MET NetCDF point observation data to pandas so it can be read and modified in a python embedding script. Added an example python embedding script.
* Ignore python cache files.
* Fixed function call.
* Reduce cognitive complexity to satisfy SonarQube and add a boolean return value to catch if the function fails to read data.
* Clean up the script and add comments.
* Replace a call to an object function that doesn't exist; handle the exception when the file passed to the script cannot be read by the NetCDF library.
* Rename example script.
* Add the new example script to the makefiles.
* Fix logic to build the pandas DataFrame to properly get header information from observation header IDs.
* Per #2781, add a unit test to demonstrate a python embedding script that reads a MET NetCDF point observation file and converts it to a pandas DataFrame.
* Per #2781, added an init function for nc_point_obs to take an input filename. Also raise a TypeError exception from nc_point_obs.read_data() if the input file cannot be read.
* Call the parent class init function to properly initialize nc_point_obs.
* Feature #2833 pcp_combine_missing (#2886)
* Per #2883, add the -input_thresh command line option to configure allowable missing input files.
* Per #2883, update the pcp_combine usage statement.
* Per #2883, update the existing pcp_combine -derive unit test example by adding 3 new missing file inputs at the beginning, middle, and end of the file list. The first two are ignored since they include the MISSING keyword, but the third without that keyword triggers a warning message as desired. The -input_thresh option is added to only require 70% of the input files be present. This should produce the exact same output data.
* Per #2883, update the pcp_combine logic for the sum command to allow missing data files based on the -input_thresh threshold. Add a test in unit_pcp_combine.xml to demonstrate.
* Update docs/Users_Guide/reformat_grid.rst
Co-authored-by: George McCabe <23407799+georgemccabe@users.noreply.github.com>
* Per #2883, update the pcp_combine usage statement in the code to be more similar to the User's Guide.
* Per #2883, switch to using derive_file_list_missing as the one containing missing files and recreate derive_file_list as it had existed for the test named pcp_combine_derive_VLD_THRESH.
* Per #2883, move initialization inside the same loop to resolve SonarQube issues.
* Per #2883, update sum_data_files() to switch from allocating memory to using STL vectors to satisfy SonarQube.
* Per #2883, changes to variable declarations to satisfy SonarQube.
* Per #2883, address more SonarQube issues.
* Per #2883, backing out an unintended change I made to tcrmw_grid.cc. This change belongs on a different branch.
* Per #2883, update the logic of the parse_file_list_type() function to handle python input strings. Also update pcp_combine to parse the type of input files being read and log the non-missing python input files expected.
---------
Co-authored-by: George McCabe <23407799+georgemccabe@users.noreply.github.com>
* Per #2888, update STATAnalysisJob::dump_stat_line() to support dumping stat line types VCNT, RPS, DMAP, and SSIDX. (#2891)
* Per #2659, making updates as proposed at the 20240516 MET Eng. Mtg. (#2895)
* Feature #2395 TOTAL_DIR (#2892)
* Per #2395, remove the n_dir_undef and n_dira_undef variables that are superseded by the new dcount and dacount VL1L2Info members to keep track of the number of valid wind direction vectors.
* Per #2395, add TOTAL_DIR columns to the VL1L2, VAL1L2, and VCNT line types and update the header column tables.
* Per #2395, update the User's Guide to list the new TOTAL_DIR columns in the VL1L2, VAL1L2, and VCNT line types.
* Per #2395, update stat_analysis to parse the new TOTAL_DIR columns and use the values to aggregate results when needed.
* Per #2395, for SonarQube, change 'const char *' to 'const char * const' to satisfy the finding that 'Global variables should be const.' Should probably switch from 'const char *' to strings eventually. But for now, I'm just making up for some SonarQube technical debt.
* Per #2395, fix typo in the placement of the DIR_ME column name in the met_header_columns_V12.0.txt file.
* Per #2395, add 2 new Stat-Analysis jobs to demonstrate the processing of VL1L2 lines.
* Per #2395, update the logic of is_vector_dir_stat(). Instead of just checking 'DIR_', check 'DIR_ME', 'DIR_MAE', and 'DIR_MSE' to avoid a false positive match for the 'DIR_ERR' column, which is computed from the vector partial sums rather than the individual direction differences.
* Bugfix #2897 develop python_valid_time (#2899)
* Per #2897, fix typos in 2 log messages. Also fix the bug in storing the valid time strings. The time string in vld_array should exactly correspond to the numeric unixtime values in vld_num_array. Therefore they need to be updated inside the same if block. The bug is that we were storing only the unique unixtime values but storing ALL of the valid time strings, not just the unique ones.
* Per #2897, minor change to the formatting of a log message.
* MET #2897, don't waste time searching, just set the index to n - 1.
* Per #2897, remove the unused add_prec_point_obs(...) function.
* Per #2897, update add_point_obs(...) logic for DEBUG(9) to print very detailed log messages about which obs are being rejected and which are being used for each verification task.
* Per #2897, refine the 'using' log message to make the wording consistent with the summary rejection reason counts log message * Per #2897, update the User's Guide about -v 9 for Point-Stat --------- Co-authored-by: j-opatz Co-authored-by: MET Tools Test Account * Bugfix 2867 point2grid qc flag (#2890) * #2867 Added compute_adp_qc_flag and adjusted ADP QC flags * #2867 Added point2grid_GOES_16_ADP_Enterprise_high. Changed AOD QC flags to 0,1,2 (was 1,2,3) * #2867 Added get_nc_att_values_ * #2867 Added get_nc_att_values. Added the argument allow_conversion to get_nc_data(netCDF::NcVar *, uchar *data) * #2867 Read the ADP QC flag values and meanings attributes from DQF variable and set the QC high, medium, low values to support Enterprise algorithm. Adjusted the ADP QC values by using AOD qc values * #2867 Cleanup * #2867 Corrected indent * #2867 Changed log message * #2867 Removed unused argument * #2867 Removed unused argument * Cleanup * #2867 Fix SonarQube findings * #2867 Deleted protected section with no members * #2867 Cleanup * #2867 Fixed SonarQube findings; unused local variables, declare as const, etc. * #2867 Moved include directives to top * #2867 Changed some arguments to references to avoid copying objects * #2867 Do not filter by QC flag if -qc is not given * #2867 Use enum class for GOES QC: HIGH, MEDIUM, and LOW * #2867 Added log messages back which were deleted accidentally * #2867 Changed static const to constexpr * #2867 Initial release. Separated from nc_utils.h * #2867 Added nc_utils_core.h * #2867 Moved some blocks to nc_utils_core.h * #2867 Include nc_utils_core.h * #2867 Added const references * Per #2867, fixing typo in comments. --------- Co-authored-by: Howard Soh Co-authored-by: j-opatz * Hotfix to develop to fix the update_truth.yml workflow logic. This testing workflow run failed (https://github.com/dtcenter/MET/actions/runs/9209471209). Here we switch to a unique update truth branch name to avoid conflicts.
* Avoid pushing directly to the develop or main_vX.Y branches since that is not necessary for the automation logic in MET. * #2904 Changed R path to R-4.4.0 (#2905) Co-authored-by: Howard Soh * Feature #2912 pb2nc error (#2914) * Feature 2717 convert unit.pl to unit.py (#2871) * created unit.py module in new internal/test_unit/python directory * added xml parsing to unit.py * added repl_env function * added reading of the remaining xml tags in build_tests function * progress on main function (putting together test commands) * a few more lines in the main function * minor updates * fixed how the test command was being run * added if name/main and command line parsing * fixed handling of no 'env' in cmd_only mode * handle params from xml that have \ after filename without space in between * added logging * added some more pieces to unit * more updates to unit.py, including running checks on output files * bug fixes, improved handling of output file names, improved handling of env vars, improved logging output * fixed how shell commands are run, and other minor fixes * added last bits from the perl script, fixed some bugs * created unit.py module in new internal/test_unit/python directory * added xml parsing to unit.py * added repl_env function * added reading of the remaining xml tags in build_tests function * progress on main function (putting together test commands) * a few more lines in the main function * minor updates * update scripts to call python unit test script instead of the old perl script * fix she-bang line to allow script to be run without python3 before it * add missing test_dir and exit_on_fail tags that are found in the rest of the unit test xml files * fix call to logger.warning * change tags named 'exists' to 'exist' to match the rest of the xml files * added logger to function * removed tab at end of line that was causing output file path to be excluded from the command * fix broken checks for output files * incorporated George's recommended
changes * changed default to overwrite logs; allow for more than one xml file to be passed in command --------- Co-authored-by: Natalie babij Co-authored-by: Natalie babij Co-authored-by: Natalie babij Co-authored-by: Natalie Babij Co-authored-by: John Halley Gotway Co-authored-by: George McCabe <23407799+georgemccabe@users.noreply.github.com> Co-authored-by: j-opatz * Bugfix 2867 point2grid qc unittest (#2913) * #2867 Added compute_adp_qc_flag and adjusted ADP QC flags * #2867 Added point2grid_GOES_16_ADP_Enterprise_high. Changed AOD QC flags to 0,1,2 (was 1,2,3) * #2867 Added get_nc_att_values_ * #2867 Added get_nc_att_values. Added the argument allow_conversion to get_nc_data(netCDF::NcVar *, uchar *data) * #2867 Read the ADP QC flag values and meanings attributes from DQF variable and set the QC high, medium, low values to support Enterprise algorithm. Adjusted the ADP QC values by using AOD qc values * #2867 Cleanup * #2867 Corrected indent * #2867 Changed log message * #2867 Removed unused argument * #2867 Removed unused argument * Cleanup * #2867 Fix SonarQube findings * #2867 Deleted protected section with no members * #2867 Cleanup * #2867 Fixed SonarQube findings; unused local variables, declare as const, etc. * #2867 Moved include directives to top * #2867 Changed some arguments to references to avoid copying objects * #2867 Do not filter by QC flag if -qc is not given * #2867 Use enum class for GOES QC: HIGH, MEDIUM, and LOW * #2867 Added log messages back which were deleted accidentally * #2867 Changed static const to constexpr * #2867 Initial release.
Separated from nc_utils.h * #2867 Added nc_utils_core.h * #2867 Moved some blocks to nc_utils_core.h * #2867 Include nc_utils_core.h * #2867 Added const references * #2867 Some 'static const' were changed to constexpr * #2867 Changed -qc options (1,2,3 to 0,1 - high & medium) for AOD * #2867 Merged develop branch * #2867 Corrected the unit test name --------- Co-authored-by: Howard Soh * Feature #2911 tc_stat_set_hdr (#2916) * Per #2911, no real changes for Stat-Analysis. Just changing order of variables for consistency. * Per #2911, add StatHdrColumns::apply_set_hdr_opts(...) function to be used by TC-Stat. * Per #2911, move ByColumn to the TCStatJob base class and add HdrName and HdrValue to support the -set_hdr job command. * Per #2911, update GSI tools to call the newly added StatHdrColumns::apply_set_hdr_opts(...) function. * Per #2911, update logic of Stat-Analysis for consistency to make use of common apply_set_hdr_opts() function. * Per #2911, add DataLine::set_item() function to support -set_hdr options. * Per #2911, just update contents of error message * Per #2911, add TCStatLine member functions for has() and get_offset(). * Per #2911, update tc_stat to support applying -set_hdr to TC-Stat filter jobs. * Per #2911, revise TC-Stat config files to exercise the -set_hdr job command option * Per #2911, update TC-Stat documentation to mention the -set_hdr job command option * Per #2911, add note * Per #2911, as recommended by SonarQube, make some of these member functions const. * Bugfix #2856 develop ens_climo (#2918) * Per #2856, port over fixes from main_v11.1 to develop. * Per #2856, correct conditionals in set_job_controls.sh and tweak existing Ensemble-Stat configuration file to exercise the logic that's being impacted here.
* Bugfix #2841 develop tang_rad_winds (#2921) * Per #2841, port over fixes from bugfix_2841_main_v11.1_tang_rad_winds for the develop branch * Per #2841, clarify in the docs that azimuths are defined in degrees counter-clockwise from due east. * Per #2841, just updating with output from enum_to_string. * Per #2841, tweak the documentation. * Per #2841, correct the location of using namespace lines. * Per #2841, update compute_tc_diag.py to no longer skip writing the radial and tangential wind diagnostics. * Per #2841, update compute_tc_diag.py to no longer skip writing radial and tangential wind diagnostics. * Revert "Per #2841, update compute_tc_diag.py to no longer skip writing radial and tangential wind diagnostics." This reverts commit f097345bedcfcca663e8fb4322eed5b5e00e19fd. * Revert "Per #2841, update compute_tc_diag.py to no longer skip writing the radial and tangential wind diagnostics." This reverts commit c0402151b038c59efab99c060cc5c390edf002f6. * Per #2841, update comp_dir.sh logic to include .dat in the files that are diffed * Replace tab with spaces * Per #2841, correct the units for the azimuth netcdf output variable * Per #2841, reverse the x dimension of the rotated latlon grid to effectively switch from counterclockwise rotation to clockwise. 
--------- Co-authored-by: MET Tools Test Account * Feature #2601 seeps climo config (#2927) * #2601 Added seeps_grid_climo_name and seeps_point_climo_name * #2601 Added seeps_grid_climo_name * #2601 Removed SEEPS settings * #2601 Initial release * #2601 Changed to set the SEEPS climo by using the configuration * #2601 Removed SEEPS settings at PointStatConfig_APCP and use PointStatConfig_SEEPS for SEEPS testing * #2601 Updated description for seeps_grid_climo_name * #2601 Added an argument for the SEEPS climo file * #2601 Added conf_key_seeps_grid_climo_name and conf_key_seeps_point_climo_name * #2601 Support the climo filename from the configuration * #2601 Corrected key for climo name * Removing duplicate word --------- Co-authored-by: Howard Soh Co-authored-by: Julie Prestopnik * Feature 2673 sonarqube beta5 redundant parentheses (#2930) * #2673 Removed redundant_parentheses * #2673 Removed redundant_parentheses * #2673 Removed redundant parentheses * #2673 Removed redundant parentheses --------- Co-authored-by: Howard Soh * Fix release checksum action (#2929) * Feature 2857 tripolar coordinates (#2928) * #2857 Added MetNcCFDataFile::build_grid_from_lat_lon_vars * #2857 Added NcCfFile::build_grid_from_lat_lon_vars * #2857 Check the coordinates attribute to find latitude, longitude, and time variables * #2857 Get the lat/lon variables from the coordinates attribute if it exists * #2857 Added two constants * #2857 Deleted debug messages * #2857 Added lat_vname and lon_vname for var_name_map * #2857 Added two unit tests: point2grid_sea_ice_tripolar and point2grid_sea_ice_tripolar_config * #2857 Initial release * #2857 Correct dictionary to get file_type * #2857 Do not check the time variable for point2grid * #2857 Added point2grid_tripolar_rtofs --------- Co-authored-by: Howard Soh * Feature 2932 v12.0.0-beta5 (#2933) * Per #2932, updating version and release notes * Per #2932, updating date on release notes * Per #2932, fixed formatting and links * Update
release-notes.rst * Update release-notes.rst Removing inline backticks since they do not format the way I expected, especially when put inside bolded release notes. --------- Co-authored-by: John Halley Gotway * Feature fix release notes (#2934) * Fixing up release notes * Update release-notes.rst --------- Co-authored-by: John Halley Gotway * Per dtcenter/METplus#2643 discussion, add more detail about the budget interpolation method. * Feature #2924 fcst climo, PR 1 of 2 (#2939) * Per #2924, Update the MPR and ORANK output line types to just write duplicate existing climo values, update the header tables and MPR/ORANK documentation tables. * Per #2924, update get_n_orank_columns() logic * Per #2924, update the Stat-Analysis parsing logic to parse the new MPR and ORANK climatology columns. * Per #2924, making some changes to the vx_statistics library to store climo data... but more work to come. Committing this first set of changes that are incomplete but do compile. * Per #2924, this big set of changes does compile but make test produces a segfault for ensemble-stat * Per #2924, fix return value for is_keeper_obs() * Per #2924, move fcst_info/obs_info into the VxPairBase pointer. * Per #2924, update Ensemble-Stat to set the VxPairBase::fcst_info pointer * Per #2924, update handling of fcst_info and obs_info pointers in Ensemble-Stat * Per #2924, update the GSI tools to handle the new fcst climo columns. * Per #2924, add backward compatibility logic so that when old climo column names are requested, the new ones are used. * Per #2924, print a DEBUG(2) log message if old column names are used. * Per #2924, switch the unit tests to reference the updated MPR column names rather than the old ones. * Per #2924, work in progress. Not fully compiling yet * Per #2924, another round of changes. Removing MPR:FCST_CLIMO_CDF output column. This compiles but not sure if it actually runs yet * Per #2924, work in progress * Per #2924, work in progress. Almost compiling again.
* Per #2924, get it compiling * Per #2924, add back in support for SCP and CDP which are interpreted as SOCP and OCDP, resp * Per #2924, update docs about SCP and CDP threshold types * Per #2924, minor whitespace changes * Per #2924, fix an uninitialized pointer bug by defining/calling SeepsClimoGrid::init_from_scratch() member function. The constructor had been calling clear() to delete pointers that weren't properly initialized to nullptr. Also, simplify some map processing logic. * Per #2924, rename SeepsAggScore from seeps to seeps_agg for clarity and to avoid conflicts in member function implementations. * Per #2924, fix seeps compilation error in Point-Stat * Per #2924, fix bug in the boolean logic for handling the do_climo_cdp NetCDF output option. * Per #2924, add missing exit statement. * Per #2924, tweak threshold.h * Per #2924, define one perc_thresh_info entry for each enumerated PercThreshType value * Per #2924, simplify the logic for handling percentile threshold types and print a log message once when the old versions are still used. * Per #2924, update the string comparison return value logic * Per #2924, fix the perc thresh string parsing logic by calling ConcatString::startswith() * Per #2924, switch all instances of CDP to OCDP. Gen-Ens-Prod was writing NetCDF files with OCDP in the output variable names, but Grid-Stat was requesting that the wrong variable name be read. So the unit tests failed. * Per #2924, add more doc details * Per #2924, update default config file to indicate when climo_mean and climo_stdev can be set separately in the fcst and obs dictionaries. * Per #2924, update the MET tools to parse climo_mean and climo_stdev separately from the fcst and obs dictionaries. * Per #2924, backing out new/modified columns to minimize reg test diffs * Per #2924, one more section to be commented out later.
* Per #2924, replace several calls to strncmp() with ConcatString::startswith() to simplify the code * Per #2924, strip out some more references to OBS_CLIMO_... in the unit tests. * Per #2924, delete accidental file * Per #2924 fix broken XML comments * Per #2924, fix comments * Per #2924, address SonarQube findings * Per #2924, tweak a Point-Stat and Grid-Stat unit test config file to make the output more comparable to develop. * Per #2924, fix bug in the logic of PairDataPoint and PairDataEnsemble, when looping over the 3-dim array do not return when checking the climo and fcst values. Instead we need to continue to the next loop iteration. * Per #2924, address more SonarQube code smells to reduce the overall number in MET for this PR. * Per #2924, correct the logic for parsing climo data from MPR lines. * Per #2924, cleanup grid_stat.cc source code by making calls to DataPlane::is_empty() and Grid::nxy(). * Per #2924, remove unneeded ==0 * Hotfix to the develop branch for a copy/paste bug introduced by PR #2939 * Feature #2924 sal1l2_mae, PR 3 of 3 (#2943) * Per #2924, track SL1L2 and SAL1L2 MAE scores with separate variables since they are no longer the same value. I renamed the existing 'mae' as 'smae' and added a new 'samae' variable. Renaming the existing lets me use the compiler help find all references to it throughout the code. * Per #2924, update the User's Guide climatology details and equations. * Per #2924, some changes to aggr_stat_line.cc and series_analysis.cc to satisfy some SonarQube code smells. * Update develop to clarify masking poly options based on METplus Discussion dtcenter/METplus#2650 * Remove two semi-colons that are not actually necessary to avoid confusion. * Per dtcenter/METplus#2653 discussion, update the MTD usage statement to clarify that data specified in the fcst dictionary is read from the -single input files. 
* Feature #2924 fcst climo, PR 2 of 3 (#2942) * Per #2924, Update the MPR and ORANK output line types to just write duplicate existing climo values, update the header tables and MPR/ORANK documentation tables. * Per #2924, update get_n_orank_columns() logic * Per #2924, update the Stat-Analysis parsing logic to parse the new MPR and ORANK climatology columns. * Per #2924, making some changes to the vx_statistics library to store climo data... but more work to come. Committing this first set of changes that are incomplete but do compile. * Per #2924, this big set of changes does compile but make test produces a segfault for ensemble-stat * Per #2924, fix return value for is_keeper_obs() * Per #2924, move fcst_info/obs_info into the VxPairBase pointer. * Per #2924, update Ensemble-Stat to set the VxPairBase::fcst_info pointer * Per #2924, update handling of fcst_info and obs_info pointers in Ensemble-Stat * Per #2924, update the GSI tools to handle the new fcst climo columns. * Per #2924, add backward compatibility logic so that when old climo column names are requested, the new ones are used. * Per #2924, print a DEBUG(2) log message if old column names are used. * Per #2924, switch the unit tests to reference the updated MPR column names rather than the old ones. * Per #2924, work in progress. Not fully compiling yet * Per #2924, another round of changes. Removing MPR:FCST_CLIMO_CDF output column. This compiles but not sure if it actually runs yet * Per #2924, work in progress * Per #2924, work in progress. Almost compiling again. * Per #2924, get it compiling * Per #2924, add back in support for SCP and CDP which are interpreted as SOCP and OCDP, resp * Per #2924, update docs about SCP and CDP threshold types * Per #2924, minor whitespace changes * Per #2924, fix an uninitialized pointer bug by defining/calling SeepsClimoGrid::init_from_scratch() member function.
The constructor had been calling clear() to delete pointers that weren't properly initialized to nullptr. Also, simplify some map processing logic. * Per #2924, rename SeepsAggScore from seeps to seeps_agg for clarity and to avoid conflicts in member function implementations. * Per #2924, fix seeps compilation error in Point-Stat * Per #2924, fix bug in the boolean logic for handling the do_climo_cdp NetCDF output option. * Per #2924, add missing exit statement. * Per #2924, tweak threshold.h * Per #2924, define one perc_thresh_info entry for each enumerated PercThreshType value * Per #2924, simplify the logic for handling percentile threshold types and print a log message once when the old versions are still used. * Per #2924, update the string comparison return value logic * Per #2924, fix the perc thresh string parsing logic by calling ConcatString::startswith() * Per #2924, switch all instances of CDP to OCDP. Gen-Ens-Prod was writing NetCDF files with OCDP in the output variable names, but Grid-Stat was requesting that the wrong variable name be read. So the unit tests failed. * Per #2924, add more doc details * Per #2924, update default config file to indicate when climo_mean and climo_stdev can be set separately in the fcst and obs dictionaries. * Per #2924, update the MET tools to parse climo_mean and climo_stdev separately from the fcst and obs dictionaries. * Per #2924, backing out new/modified columns to minimize reg test diffs * Per #2924, one more section to be commented out later. * Per #2924, replace several calls to strncmp() with ConcatString::startswith() to simplify the code * Per #2924, strip out some more references to OBS_CLIMO_... in the unit tests. * Per #2924, delete accidental file * Per #2924 fix broken XML comments * Per #2924, fix comments * Per #2924, address SonarQube findings * Per #2924, tweak a Point-Stat and Grid-Stat unit test config file to make the output more comparable to develop.
* Per #2924, fix bug in the logic of PairDataPoint and PairDataEnsemble, when looping over the 3-dim array do not return when checking the climo and fcst values. Instead we need to continue to the next loop iteration. * Per #2924, address more SonarQube code smells to reduce the overall number in MET for this PR. * Per #2924, correct the logic for parsing climo data from MPR lines. * Per #2924, update MPR and ORANK line types to update/add FCST/OBS_CLIMO_MEAN/STDEV/CDF columns. * Per #2924, cleanup grid_stat.cc source code by making calls to DataPlane::is_empty() and Grid::nxy(). * Per #2924, remove unneeded ==0 * Per #2924, working on PR2. * Per #2924, update User's Guide with notional example of specifying climo_mean and climo_stdev separately in the fcst and obs dicts. * Per #2924, adding a new unit test. It does NOT yet run as expected. Will debug on seneca * Per #2924, pass the description string to the read_climo_data_plane*() function to provide better log messages * Per #2924, more work on consistent log messages * Per #2924, tweak the configuration to define both field, climo_mean, and climo_stdev in both the fcst and obs dictionaries * Per #2924, tweak the unit_climatology_mixed.xml test * Per #2924, only whitespace changes. * Per #2924, missed swapping MET #2924 changes in 3 test files * Per #2924, delete accidentally committed file * Per #2924, delete accidentally committed files * Per #2924, add support for GRIB1 time range indicator value of 123 used for the corresponding METplus Use Case. Note that there are 22 other TRI values not currently supported. * Adds caveat regarding longitudes appearing in DEBUG statements with a… (#2947) * Adds caveat regarding longitudes appearing in DEBUG statements with a different sign to the FAQ. 
* Update appendixA.rst Missing paren * Create install_met_env.cactus * Adding special script for installing beta5 on wcoss2 * Modifying script, including updates to eckit and atlas * Corrected version of bufr being used * Feature #2938 pb2nc_center_time (#2954) * Per #2938, define CRC_Array::add_uniq(...) member function which is now used in PB2NC * Per #2938, replace n_elements() with n() to make the code more concise. Refine log/warning message when multiple message center times are encountered. * Feature #1371 series_analysis (#2951) * Per #1371, add -input command line argument and add support for ALL for the CTC, MCTC, SL1L2, and PCT line types. * Per #1371, rename the -input command line option as -aggregate instead * Per #1371, work in progress * Per #1371, just comments * Per #1371, working on aggregating CTC counts * Per #1371, work in progress * Per #1371, update timing info using time stamps in the aggr file * Per #1371, close the aggregate data file * Per #1371, define set_event() and set_nonevent() member functions * Per #1371, add logic to aggregate MCTC and PCT counts * Merging changes from develop * Per #1371, work in progress aggregating all the line statistics types. Still have several issues to address * Per #1371, switch to using get_stat() functions * Per #1371, work in progress. More consolidation * Per #1371, correct expected output file name * Per #1371, consistent regridding log messages and fix the Series-Analysis PairDataPoint object handling logic. * Per #1371, check the return status when opening the aggregate file. * Per #1371, fix prc/pjc typo * Per #1371, fix the series_analysis PCT aggregation logic and add a test to unit_series_analysis.xml to demonstrate. * Per #1371, resolve a few SonarQube findings * Per #1371, make use of range-based for loop, as recommended by SonarQube * Per #1371, update series-analysis to apply the valid data threshold properly using the old aggregate data and the new pair data.
* Per #1371, update series_analysis to buffer data and write it all at once instead of storing data value by value for each point. * Per #1371, add useful error message when required aggregation variables are not present in the input -aggr file. * Per #1371, print a Debug(2) message listing the aggregation fields being read. * Per #1371, correct operator+= logic in met_stats.cc for SL1L2Info, VL1L2Info, and NBRCNTInfo. The metadata settings, like fthresh and othresh, were not being passed to the output. * Per #1371, the DataPlane for the computed statistics should be initialized to a field of bad data values rather than the default value of 0. Otherwise, 0's are reported for stats at grid points with no data when they should really be reported as bad data! * Per #1371, update logic of the compute_cntinfo() function so that CNT statistics can be derived from a single SL1L2Info object containing both scalar and scalar anomaly partial sums. These changes enable CNT:ANOM_CORR to be aggregated in the Series-Analysis tool. * Per #1371, fix logic of climo log message. * Per #1371, this is actually related to MET #2924. In compute_pctinfo(), use obs climo data first, if provided, and if not, use fcst climo data. * Per #1371, fix indexing bug (+i instead of +1) when checking the valid data count. Also update the logic of read_aggr_total() to return a count of 0 for bad data. * Per #1371, add logic to aggregate the PSTD BRIERCL and BSS statistics in the do_climo_brier() function. Tested manually to confirm that it works. * Per #1371, switch to using string literals to satisfy SonarQube * Per #1371, update series_analysis tests in unit_climatology_1.0deg.xml to demonstrate aggregating climo-based stats. * Per #1371, remove extra comment * Per #1371, skip writing the PCT THRESH_i columns to the Series-Analysis output since they are not used * Per #1371, fix the R string literals to remove \t and \n escape sequences.
* Per #1371, update the read_aggr_data_plane() suggestion strings. * Per #1371, ignore unneeded PCT 'THRESH_' variables both when reading and writing ALL PCT columns. * Per #1371, update the test named series_analysis_AGGR_CMD_LINE to include data for the F42 lead time that had previously been included for the same run in the develop branch. Note however that the timestamps in the output file for the develop branch (2012040900_to_2012041100) were wrong and have been corrected here (2012040900_to_2012041018) to match the actual data. * Per #1371, update the -aggr note to warn users about slow runtimes * Feature 2948 cxx17 (#2953) * Per #2948, updating versions of ecbuild, eckit, and atlas * Per #2948, Adding MET_CXX_STANDARD * Per #2948, updated wording for MET_CXX_STANDARD description * Per #2948, updating script to work with two versions of ecbuild, eckit, and atlas * Per #2948, without this change, there are compilation problems if the user wants to compile without python * Per #2948, fixing logic for MET_CXX_STANDARD * Per #2928, adding missing end bracket * Per #2948, fixed the logic for compiling versions of ecbuild, eckit, and atlas * Per #2948, fixed syntax for setting CXXFLAGS * Per #2948, adding new Makefile.in files and configure and changing METbaseimage 3.2 to 3.3.
* Per #2948, updating version of met base tag from 3.2 to 3.3 * Per #2948, adding --enable-all MET_CXX_STANDARD=11 job * Update compilation_options.yml * Per #2948, added a job10 for MET_CXX_STANDARD=14 * Per #2948, added brief documentation for the MET_CXX_STANDARD option --------- Co-authored-by: Julie Prestopnik Co-authored-by: John Halley Gotway * Feature 1729 set attr grid (#2955) * #1729 Allow changing to a different grid size if the raw size is 0 * Added build_grid_by_grid_string and build_grid_by_grid_string * #1729 Calls build_grid_by_grid_string * #1729 Added set_attr_grid at the -field option * #1729 Set obs_type to TYPE_NCCF if the file_type is given at the config file * #1729 Support set_attr_grid and changed Error messages to Warning * #1729 Fixed SonarQube findings * #1729 Initial release for unit test * #1729 Added update_missing_values * #1729 Deleted a shadowed local variable * #2673 Added more is_eq * #2673 Added get_exe_duration * #2673 Reduced nested statements * #2673 Fixed SonarQube findings * #2673 Fixed SonarQube findings * #2673 Fixed SonarQube findings * #1729 Added a unit test plot_data_plane_set_attr_grid * #1729 Added a unit test point2grid_cice_set_attr_grid * #1729 Changed back the verbose level * #1729 Corrected typo --------- Co-authored-by: Howard Soh * Bugfix #2958 develop BAGSS SEDI CI (#2959) * Bugfix 2936 point2grid gfs (#2964) * #2936 Support 1D lat/lon values * #2936 Initial release * #2936 Cast the data type to avoid a compile warning * #2936 Added a unit test point2grid_gfs_1D_lat_lon --------- Co-authored-by: Howard Soh * Bugfix 2968 point2grid set attr grid (#2969) * #2968 Corrected set_attr_grid for point2grid_cice_set_attr_grid * #2968 Compare the DataPlane size and the variable data size * #2968 nx and ny are not ignored with set_attr_grid * #2968 Compare the DataPlane size and the variable data size --------- Co-authored-by: Howard Soh * Feature 2937 update unit (#2944) * added single quotes around env var/val
pairs in export statements in cmd only mode * updated logic in unit() to check exec return value against expected return value; created TEST xml file to test this feature * deleted TEST_ xml, added test with retval 1 to unit_ascii2nc --------- Co-authored-by: Natalie Babij * Feature #2887 categorical weights PR 1 of 2 (#2967) * Per #2887, update NumArray::vals() to return a reference to the vector rather than a pointer to doubles. * Per #2887, switch over the whole ContingencyTable class hierarchy from storing integer counts to storing double-precision weights. * Add ContingencyTable::is_integer() member function to check whether the table contains all integers * Per #2887, update parse_stat_line.cc to get it to compile after changing PCT to store thresholds in a std::vector. * Per #2887, update PCTInfo::clear() logic. * Per #2887, update ctc_by_row() logic to create reproducible results with the develop branch. * Per #2887, update logic of define_prob_bins() to add a final >=1.0 threshold if needed. While ==0.1 works fine, I found that ==0.05 did not because the last >=1.0 threshold was missing, likely due to floating point precision issues. This change should fix that problem. * Per #2887, update roc_auc() function to match the develop branch * Per #2887, fix bug in computation of far() * Per #2887, replaced all ==0 integer equality checks with calls to is_eq() instead and fix a couple of equations to snuff out diffs in some CTS statistics. * Per #2887, address some of the 34 SonarQube code smells flagged for this PR. Note that the compute_ci.h/.cc changes are necessary and good since we should be computing CI's using doubles instead of integer counts. * Per #2887, update run_sonarqube.sh to specify the target CXX standard as 11. The hope is that this will limit the findings to only those features available in the C++11 standard. * Per #2887, update to SonarQube version 6.1.0.4477 released on 6/27/2024.
* Per #2887, updating build_met_sonarqube.sh to specify --std=c++11 since c++17 is used by default * Hotfix to develop to fix a bug introduced for MET #2887. Refine the define_prob_bins() utility function so that ==n probability thresholds result in the correct number of probability thresholds. We were adding an unnecessary 10th bin (from 1.07143 to 1.0) for the ==7 probability threshold type. * Fix typo in tc-pairs.rst * Update build_docker_and_trigger_metplus.yml The docs directory was moved up to the top-level of the repository but this workflow was not updated. Changing the ignore setting so that doc-only updates do not trigger the full METplus testing workflow. * Feature 2023 remove double quotes around keywords (#2974) * testing AREA and AUTO changes * Keywords B thru L * thru R * adding quotes back in for lower case items * S thru the end of the document * Removing double quotes around 3 key words * Per #2023, adding a label name for the Attributes section * Per #2023, adding an internal link for the MODE tool Attributes section. * Adding quotes around Valid basins entries * more double quote updates * more complex updates with Julie P help * removing double quotes * fixing typos * removing double quotes * unbolding SURFACE and putting it in double quotes * fixing grammar * grammar * fixing typo * fixing typo --------- Co-authored-by: Julie Prestopnik * Feature #2924 parse_config (#2963) * Per #2924, remove GenEnsProd config file comment about parsing desc separately from each obs.field entry because the obs dictionary does not exist in the GenEnsProd config file. * Per #2924, update list of needed config entry names * Per #2924, remove const from the parent() member function so that we can perform lookups for the parent. * Per #2924, update the signature for and logic of the utility functions that retrieve the climatology data.
Rather than requiring all the climo_mean and climo_stdev dictionary entries to be defined at the same config file context level, parse each one individually. This enables the METplus wrappers to only partially override this dictionary and still rely on the default values provided in MET's default configuration files. * Per #2924, update all calls to the climatology utility functions based on the new function signature. Also update the tools to check the number of climo fields separately for the forecast and observation climos. * Per #2924, update the parsing logic for the climatology regrid dictionary. Use config.fcst.climo_mean.regrid first, config.fcst.regrid second, and config.climo_mean.regrid third. Notably, DO NOT use config.regrid. This is definitely the problem with having regrid specified at multiple config file context levels. It makes the logic for deciding which one to use very messy. * Per #2924, forgot to add an else to print an error * Per #2924, remove extraneous semicolon * Per #2924, move 'fcst.regrid' into 'fcst.climo_mean.regrid'. Defining the climatology regridding logic inside fcst is problematic because it applies to the forecast data as well and you end up with the verification grid being undefined. So the climo regridding logic must be defined in 'climo_mean.regrid' either within the 'fcst' and 'obs' dictionaries or at the top-level config context. * Per #2924, based on PR feedback from @georgemccabe, add the Upper_Left, Upper_Right, Lower_Right, and Lower_Left interpolation methods to the list of valid options for regridding, as already indicated in the MET User's Guide. * Per #2924, update the logic of parse_conf_regrid() to (hopefully) make it work the way @georgemccabe expects it to. It now uses pointers to both the primary and default dictionaries and parses each entry individually. * Per #2924, need to check for non-null pointer before using it * Per #2924, revise the climo_name dictionary lookup logic when parsing the regrid dictionary.
* Per #2924, update logic for handling RegridInfo * Per #2924, remove the default regridding information from the 'Searching' log message to avoid confusion. --------- Co-authored-by: MET Tools Test Account * Feature #2924 parse_config PR 2 (#2975) * Per #2924, remove GenEnsProd config file comment about parsing desc separately from each obs.field entry because the obs dictionary does not exist in the GenEnsProd config file. * Per #2924, update list of needed config entry names * Per #2924, remove const from the parent() member function so that we can perform lookups for the parent. * Per #2924, update the signature for and logic of the utility functions that retrieve the climatology data. Rather than requiring all the climo_mean and climo_stdev dictionary entries to be defined at the same config file context level, parse each one individually. This enables the METplus wrappers to only partially override this dictionary and still rely on the default values provided in MET's default configuration files. * Per #2924, update all calls to the climatology utility functions based on the new function signature. Also update the tools to check the number of climo fields separately for the forecast and observation climos. * Per #2924, update the parsing logic for the climatology regrid dictionary. Use config.fcst.climo_mean.regrid first, config.fcst.regrid second, and config.climo_mean.regrid third. Notably, DO NOT use config.regrid. This is definitely the problem with having regrid specified at multiple config file context levels. It makes the logic for deciding which one to use very messy. * Per #2924, forgot to add an else to print an error * Per #2924, remove extraneous semicolon * Per #2924, move 'fcst.regrid' into 'fcst.climo_mean.regrid'. Defining the climatology regridding logic inside fcst is problematic because it applies to the forecast data as well and you end up with the verification grid being undefined.
So the climo regridding logic must be defined in 'climo_mean.regrid' either within the 'fcst' and 'obs' dictionaries or at the top-level config context. * Per #2924, based on PR feedback from @georgemccabe, add the Upper_Left, Upper_Right, Lower_Right, and Lower_Left interpolation methods to the list of valid options for regridding, as already indicated in the MET User's Guide. * Per #2924, update the logic of parse_conf_regrid() to (hopefully) make it work the way @georgemccabe expects it to. It now uses pointers to both the primary and default dictionaries and parses each entry individually. * Per #2924, need to check for non-null pointer before using it * Per #2924, revise the climo_name dictionary lookup logic when parsing the regrid dictionary. * Per #2924, update logic for handling RegridInfo * Per #2924, remove the default regridding information from the 'Searching' log message to avoid confusion. * Per #2924, escape sequences, like \n, cannot be used inside R-string literals. * Per #2924, update the logic of check_climo_n_vx() * Per #2924, revise logic in read_climo_data_plane_array(). Check the number of climo fields provided. If there's 0, just return since no data has been requested. If there's 1, use it regardless of the number of input fields. If there's more than 1, just use the requested i_vx index value. * Per #2924, update Series-Analysis to set both i_fcst and i_obs when looping over the series entries. * Per #2924, no real change. Just whitespace. * Unrelated to #2924, superficial changes to formatting of method_name strings for consistency. * Per #2924, add a new series_analysis test that ERRORS OUT prior to this PR but works after the changes in this PR. 
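The per-entry fallback parsing described for parse_conf_regrid() above, where each regrid entry is looked up in the primary dictionary and falls back to the default dictionary individually, can be sketched as follows. This is a hypothetical Python simplification of MET's C++ Dictionary logic; the entry names and the dict-based interface are assumptions made for illustration.

```python
# Illustrative sketch (not MET source) of per-entry fallback parsing:
# a partial override keeps the remaining default values, which is what
# lets the METplus wrappers override only some regrid entries.

REGRID_ENTRIES = ("to_grid", "method", "width", "vld_thresh", "shape")

def parse_regrid(primary, default):
    """For each known regrid entry, prefer the value in 'primary' when
    present; otherwise fall back to 'default'."""
    primary = primary or {}
    default = default or {}
    return {k: primary[k] if k in primary else default.get(k)
            for k in REGRID_ENTRIES}
```

For example, overriding only the interpolation method and width leaves to_grid, vld_thresh, and shape at their configured defaults rather than leaving them undefined.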
--------- Co-authored-by: MET Tools Test Account --------- Co-authored-by: Howard Soh Co-authored-by: John Halley Gotway Co-authored-by: Howard Soh Co-authored-by: MET Tools Test Account Co-authored-by: davidalbo Co-authored-by: j-opatz Co-authored-by: Daniel Adriaansen Co-authored-by: Julie Prestopnik Co-authored-by: George McCabe <23407799+georgemccabe@users.noreply.github.com> Co-authored-by: natalieb-noaa <146213121+natalieb-noaa@users.noreply.github.com> Co-authored-by: Natalie babij Co-authored-by: Natalie babij Co-authored-by: Natalie babij Co-authored-by: Natalie Babij Co-authored-by: Julie Prestopnik Co-authored-by: lisagoodrich <33230218+lisagoodrich@users.noreply.github.com> Co-authored-by: metplus-bot <97135045+metplus-bot@users.noreply.github.com> --- .../build_docker_and_trigger_metplus.yml | 2 +- data/config/GenEnsProdConfig_default | 1 - docs/Users_Guide/appendixA.rst | 2 +- docs/Users_Guide/appendixF.rst | 2 +- docs/Users_Guide/config_options.rst | 192 ++++++++-------- docs/Users_Guide/config_options_tc.rst | 28 +-- docs/Users_Guide/ensemble-stat.rst | 2 +- docs/Users_Guide/masking.rst | 2 +- docs/Users_Guide/mode.rst | 1 + docs/Users_Guide/overview.rst | 2 +- docs/Users_Guide/point-stat.rst | 2 +- docs/Users_Guide/reformat_point.rst | 2 +- docs/Users_Guide/stat-analysis.rst | 2 +- docs/Users_Guide/tc-pairs.rst | 2 +- internal/test_unit/config/GenEnsProdConfig | 1 - .../GenEnsProdConfig_climo_anom_ens_member_id | 1 - .../config/GenEnsProdConfig_normalize | 1 - .../config/GenEnsProdConfig_single_file_grib | 1 - .../config/GenEnsProdConfig_single_file_nc | 1 - ...nfig_climo_FCST_NCEP_1.0DEG_OBS_WMO_1.5DEG | 32 ++- .../config/SeriesAnalysisConfig_const_climo | 162 +++++++++++++ .../test_unit/xml/unit_climatology_1.0deg.xml | 28 +++ scripts/config/GenEnsProdConfig | 1 - src/basic/vx_cal/is_leap_year.cc | 4 +- src/basic/vx_config/config_constants.h | 13 +- src/basic/vx_config/config_util.cc | 214 +++++++++++++----- src/basic/vx_config/config_util.h | 63 
++++-- src/basic/vx_config/dictionary.h | 4 +- src/basic/vx_config/threshold.cc | 2 +- src/libcode/vx_data2d/var_info.cc | 54 +++-- src/libcode/vx_data2d/var_info.h | 4 +- src/libcode/vx_data2d_nc_cf/nc_cf_file.cc | 2 +- src/libcode/vx_regrid/vx_regrid.cc | 4 + src/libcode/vx_statistics/apply_mask.cc | 3 +- src/libcode/vx_statistics/read_climo.cc | 126 +++++++---- src/libcode/vx_statistics/read_climo.h | 16 +- src/tools/core/ensemble_stat/ensemble_stat.cc | 30 ++- .../ensemble_stat/ensemble_stat_conf_info.cc | 5 +- src/tools/core/grid_stat/grid_stat.cc | 33 ++- .../core/grid_stat/grid_stat_conf_info.cc | 8 +- src/tools/core/mode/mode_exec.cc | 12 +- src/tools/core/point_stat/point_stat.cc | 15 +- .../core/point_stat/point_stat_conf_info.cc | 3 +- .../core/series_analysis/series_analysis.cc | 31 ++- .../series_analysis_conf_info.cc | 5 +- src/tools/core/wavelet_stat/wavelet_stat.cc | 6 +- src/tools/other/gen_ens_prod/gen_ens_prod.cc | 13 +- src/tools/other/grid_diag/grid_diag.cc | 3 +- .../plot_point_obs_conf_info.cc | 3 +- src/tools/tc_utils/tc_diag/tc_diag.cc | 5 +- 50 files changed, 803 insertions(+), 348 deletions(-) create mode 100644 internal/test_unit/config/SeriesAnalysisConfig_const_climo diff --git a/.github/workflows/build_docker_and_trigger_metplus.yml b/.github/workflows/build_docker_and_trigger_metplus.yml index 7d7dcce29c..7d1ab738d8 100644 --- a/.github/workflows/build_docker_and_trigger_metplus.yml +++ b/.github/workflows/build_docker_and_trigger_metplus.yml @@ -5,7 +5,7 @@ on: branches: - develop paths-ignore: - - 'met/docs/**' + - 'docs/**' workflow_dispatch: diff --git a/data/config/GenEnsProdConfig_default b/data/config/GenEnsProdConfig_default index c650ec8b24..16a36f9833 100644 --- a/data/config/GenEnsProdConfig_default +++ b/data/config/GenEnsProdConfig_default @@ -13,7 +13,6 @@ model = "FCST"; // // Output description to be written -// May be set separately in each "obs.field" entry // desc = "NA"; diff --git a/docs/Users_Guide/appendixA.rst 
b/docs/Users_Guide/appendixA.rst index f39c96913a..384422af1f 100644 --- a/docs/Users_Guide/appendixA.rst +++ b/docs/Users_Guide/appendixA.rst @@ -515,7 +515,7 @@ Q. What is an example of using Grid-Stat with regridding and masking turned on? This tells Grid-Stat to do verification on the "observation" grid. Grid-Stat reads the GFS and Stage4 data and then automatically regrids the GFS data to the Stage4 domain using budget interpolation. - Use "FCST" to verify the forecast domain. And use either a named + Use FCST to verify the forecast domain. And use either a named grid or a grid specification string to regrid both the forecast and observation to a common grid. For example, to_grid = "G212"; will regrid both to NCEP Grid 212 before comparing them. diff --git a/docs/Users_Guide/appendixF.rst b/docs/Users_Guide/appendixF.rst index dc81f0cd96..bc051f5a14 100644 --- a/docs/Users_Guide/appendixF.rst +++ b/docs/Users_Guide/appendixF.rst @@ -368,7 +368,7 @@ The Ensemble-Stat, Series-Analysis, MTD and Gen-Ens-Prod tools all have the abil gen_ens_prod ens1.nc ens2.nc ens3.nc ens4.nc -out ens_prod.nc -config GenEnsProd_config -In this case, a user is passing 4 ensemble members to Gen-Ens-Prod to be evaluated, and each member is in a separate file. If a user wishes to use Python embedding to process the ensemble input files, then the same exact command is used however special modifications inside the GenEnsProd_config file are needed. In the config file dictionary, the user must set the **file_type** entry to either **PYTHON_NUMPY** or **PYTHON_XARRAY** to activate the Python embedding for these tools. Then, in the **name** entry of the config file dictionaries for the forecast or observation data, the user must list the **full path** to the Python script to be run. However, in the Python command, replace the name of the input gridded data file to the Python script with the constant string **MET_PYTHON_INPUT_ARG**. 
When looping over all of the input files, the MET tools will replace that constant **MET_PYTHON_INPUT_ARG** with the path to the input file currently being processed and optionally, any command line arguments for the Python script. Here is what this looks like in the GenEnsProd_config file for the above example: +In this case, a user is passing 4 ensemble members to Gen-Ens-Prod to be evaluated, and each member is in a separate file. If a user wishes to use Python embedding to process the ensemble input files, then the same exact command is used; however special modifications inside the GenEnsProd_config file are needed. In the config file dictionary, the user must set the **file_type** entry to either **PYTHON_NUMPY** or **PYTHON_XARRAY** to activate the Python embedding for these tools. Then, in the **name** entry of the config file dictionaries for the forecast or observation data, the user must list the **full path** to the Python script to be run. However, in the Python command, replace the name of the input gridded data file to the Python script with the constant string **MET_PYTHON_INPUT_ARG**. When looping over all of the input files, the MET tools will replace that constant **MET_PYTHON_INPUT_ARG** with the path to the input file currently being processed and optionally, any command line arguments for the Python script. Here is what this looks like in the GenEnsProd_config file for the above example: .. code-block:: :caption: Gen-Ens-Prod MET_PYTHON_INPUT_ARG Config diff --git a/docs/Users_Guide/config_options.rst b/docs/Users_Guide/config_options.rst index 3d892808e0..49885c3a9d 100644 --- a/docs/Users_Guide/config_options.rst +++ b/docs/Users_Guide/config_options.rst @@ -87,26 +87,26 @@ The configuration file language supports the following data types: * The following percentile threshold types are supported: - * "SFP" for a percentile of the sample forecast values. + * SFP for a percentile of the sample forecast values. e.g. 
">SFP33.3" means greater than the 33.3-rd forecast percentile. - * "SOP" for a percentile of the sample observation values. + * SOP for a percentile of the sample observation values. e.g. ">SOP75" means greater than the 75-th observation percentile. - * "SFCP" for a percentile of the sample forecast climatology values. + * SFCP for a percentile of the sample forecast climatology values. e.g. ">SFCP90" means greater than the 90-th forecast climatology percentile. - * "SOCP" for a percentile of the sample observation climatology values. + * SOCP for a percentile of the sample observation climatology values. e.g. ">SOCP90" means greater than the 90-th observation climatology percentile. For backward compatibility, the "SCP" threshold type is processed the same as "SOCP". - * "USP" for a user-specified percentile threshold. + * USP for a user-specified percentile threshold. e.g. " 0.0. - * "FCDP" for forecast climatological distribution percentile thresholds. + * FCDP for forecast climatological distribution percentile thresholds. These thresholds require that the forecast climatological mean and standard deviation be defined using the "climo_mean" and "climo_stdev" config file options, respectively. The categorical (cat_thresh), @@ -125,7 +125,7 @@ The configuration file language supports the following data types: e.g. ">FCDP50" means greater than the 50-th percentile of the climatological distribution for each point. - * "OCDP" for observation climatological distribution percentile thresholds. + * OCDP for observation climatological distribution percentile thresholds. The "OCDP" threshold logic matches the "FCDP" logic described above. However these thresholds are defined using the observation climatological mean and standard deviation rather than the forecast climatological data. @@ -138,7 +138,7 @@ The configuration file language supports the following data types: in ensemble_stat), the following special logic is applied. 
Percentile thresholds of type equality are automatically converted to percentile bins which span the values from 0 to 100. - For example, "==OCDP25" is automatically expanded to 4 percentile bins: + For example, ==OCDP25 is automatically expanded to 4 percentile bins: >=OCDP0&&<OCDP25,>=OCDP25&&<OCDP50,>=OCDP50&&<OCDP75,>=OCDP75&&<=OCDP100 * When sample percentile thresholds of type SFP, SOP, SFCP, SOCP, or FBIAS @@ -160,13 +160,13 @@ The configuration file language supports the following data types: Prior to MET version 12.0.0, forecast climatological inputs were not supported. The observation climatological inputs were used to process - threshold types named "SCP" and "CDP". + threshold types named SCP and CDP. - For backward compatibility, the "SCP" threshold type is processed the same - as "SOCP" and "CDP" the same as "OCDP". + For backward compatibility, the SCP threshold type is processed the same + as SOCP and CDP the same as OCDP. - Users are encouraged to replace the deprecated "SCP" and "CDP" threshold - types with the updated "SOCP" and "OCDP" types, respectively. + Users are encouraged to replace the deprecated SCP and CDP threshold + types with the updated SOCP and OCDP types, respectively. * Piecewise-Linear Function (currently used only by MODE): @@ -351,14 +351,14 @@ values and/or define observation bias corrections. When processing point and gridded observations, Ensemble-Stat searches the table to find the entry defining the observation error information. The table consists of 15 columns and includes a header row defining each column. The -special string "ALL" is interpreted as a wildcard in these files. The first 6 +special string ALL is interpreted as a wildcard in these files. The first 6 columns (OBS_VAR, MESSAGE_TYPE, PB_REPORT_TYPE, IN_REPORT_TYPE, INSTRUMENT_TYPE, and STATION_ID) may be set to a comma-separated list of strings to be matched. In addition, the strings in the OBS_VAR column are interpreted as regular expressions when searching for a match.
For example, setting the OBS_VAR column to 'APCP_[0-9]+' would match observations for both APCP_03 and APCP_24. The -HGT_RANGE, VAL_RANGE, and PRS_RANGE columns should either be set to "ALL" or -"BEG,END" where BEG and END specify the range of values to be used. The +HGT_RANGE, VAL_RANGE, and PRS_RANGE columns should either be set to ALL or +BEG,END where BEG and END specify the range of values to be used. The INST_BIAS_SCALE and INST_BIAS_OFFSET columns define instrument bias adjustments which are applied to the observation values. The DIST_TYPE and DIST_PARM columns define the distribution from which random perturbations should be drawn @@ -366,7 +366,7 @@ and applied to the ensemble member values. See the obs_error description below for details on the supported error distributions. The last two columns, MIN and MAX, define the bounds for the valid range of the bias-corrected observation values and randomly perturbed ensemble member values. Values less than MIN are -reset to the mimimum value and values greater than MAX are reset to the maximum +reset to the minimum value and values greater than MAX are reset to the maximum value. A value of NA indicates that the variable is unbounded. MET_GRIB_TABLES @@ -384,7 +384,7 @@ At runtime, the MET tools read default GRIB tables from the installed *share/met/table_files* directory, and their file formats are described below: GRIB1 table files begin with "grib1" prefix and end with a ".txt" suffix. -The first line of the file must contain "GRIB1". +The first line of the file must contain GRIB1. The following lines consist of 4 integers followed by 3 strings: | Column 1: GRIB code (e.g. 11 for temperature) @@ -404,7 +404,7 @@ References: | GRIB2 table files begin with "grib2" prefix and end with a ".txt" suffix. -The first line of the file must contain "GRIB2". +The first line of the file must contain GRIB2. The following lines consist of 8 integers followed by 3 strings. 
| Column 1: Section 0 Discipline @@ -824,7 +824,7 @@ using the following entries: - width = 4; To regrid using a 4x4 box or circle with diameter 4. * The "shape" entry defines the shape of the neighborhood. - Valid values are "SQUARE" or "CIRCLE" + Valid values are SQUARE or CIRCLE * The "gaussian_dx" entry specifies a delta distance for Gaussian smoothing. The default is 81.271. Ignored if not Gaussian method. @@ -1037,9 +1037,9 @@ to be verified. This dictionary may include the following entries: thresholds to specify which matched pairs should be included in the statistics. These options apply to the Point-Stat and Grid-Stat tools. They are parsed seperately for each "obs.field" array entry. - The "mpr_column" strings specify MPR column names ("FCST", "OBS", - "CLIMO_MEAN", "CLIMO_STDEV", or "CLIMO_CDF"), differences of columns - ("FCST-OBS"), or the absolute value of those differences ("ABS(FCST-OBS)"). + The "mpr_column" strings specify MPR column names (FCST, OBS, + CLIMO_MEAN, CLIMO_STDEV, or CLIMO_CDF), differences of columns + (FCST-OBS), or the absolute value of those differences (ABS(FCST-OBS)). The number of "mpr_thresh" thresholds must match the number of "mpr_column" entries, and the n-th threshold is applied to the n-th column. Any matched pairs which do not meet any of the specified thresholds are excluded from @@ -1170,64 +1170,64 @@ File-format specific settings for the "field" entry: extended PDS for ensembles. Set to "hi_res_ctl", "low_res_ctl", "+n", or "-n", for the n-th ensemble member. - * The "GRIB1_ptv" entry is an integer specifying the GRIB1 parameter + * The GRIB1_ptv entry is an integer specifying the GRIB1 parameter table version number. - * The "GRIB1_code" entry is an integer specifying the GRIB1 code (wgrib + * The GRIB1_code entry is an integer specifying the GRIB1 code (wgrib kpds5 value). - * The "GRIB1_center" is an integer specifying the originating center. + * The GRIB1_center is an integer specifying the originating center. 
- * The "GRIB1_subcenter" is an integer specifying the originating + * The GRIB1_subcenter is an integer specifying the originating subcenter. - * The "GRIB1_tri" is an integer specifying the time range indicator. + * The GRIB1_tri is an integer specifying the time range indicator. - * The "GRIB2_mtab" is an integer specifying the master table number. + * The GRIB2_mtab is an integer specifying the master table number. - * The "GRIB2_ltab" is an integer specifying the local table number. + * The GRIB2_ltab is an integer specifying the local table number. - * The "GRIB2_disc" is an integer specifying the GRIB2 discipline code. + * The GRIB2_disc is an integer specifying the GRIB2 discipline code. - * The "GRIB2_parm_cat" is an integer specifying the parameter category + * The GRIB2_parm_cat is an integer specifying the parameter category code. - * The "GRIB2_parm" is an integer specifying the parameter code. + * The GRIB2_parm is an integer specifying the parameter code. - * The "GRIB2_pdt" is an integer specifying the product definition + * The GRIB2_pdt is an integer specifying the product definition template (Table 4.0). - * The "GRIB2_process" is an integer specifying the generating process + * The GRIB2_process is an integer specifying the generating process (Table 4.3). - * The "GRIB2_cntr" is an integer specifying the originating center. + * The GRIB2_cntr is an integer specifying the originating center. - * The "GRIB2_ens_type" is an integer specifying the ensemble type + * The GRIB2_ens_type is an integer specifying the ensemble type (Table 4.6). - * The "GRIB2_der_type" is an integer specifying the derived product + * The GRIB2_der_type is an integer specifying the derived product type (Table 4.7). - * The "GRIB2_stat_type" is an integer specifying the statistical + * The GRIB2_stat_type is an integer specifying the statistical processing type (Table 4.10). 
- * The "GRIB2_perc_val" is an integer specifying the requested percentile + * The GRIB2_perc_val is an integer specifying the requested percentile value (0 to 100) to be used. This applies only to GRIB2 product definition templates 4.6 and 4.10. - * The "GRIB2_aerosol_type" is an integer specifying the aerosol type - (Table 4.233). This applies only to GRIB2 product defintion templates + * The GRIB2_aerosol_type is an integer specifying the aerosol type + (Table 4.233). This applies only to GRIB2 product definition templates 4.46 and 4.48. - * The "GRIB2_aerosol_interval_type" is an integer specifying the aerosol - size interval (Table 4.91). This applies only to GRIB2 product defintion + * The GRIB2_aerosol_interval_type is an integer specifying the aerosol + size interval (Table 4.91). This applies only to GRIB2 product definition templates 4.46 and 4.48. - * The "GRIB2_aerosol_size_lower" and "GRIB2_aerosol_size_upper" are doubles + * The GRIB2_aerosol_size_lower and GRIB2_aerosol_size_upper are doubles specifying the endpoints of the aerosol size interval. These applies only to GRIB2 product defintion templates 4.46 and 4.48. - * The "GRIB2_ipdtmpl_index" and "GRIB2_ipdtmpl_val" entries are arrays + * The GRIB2_ipdtmpl_index and GRIB2_ipdtmpl_val entries are arrays of integers which specify the product description template values to be used. The indices are 0-based.
For example, use the following to request a GRIB2 record whose 9-th and 27-th product description @@ -1722,13 +1722,13 @@ mask_missing_flag The "mask_missing_flag" entry specifies how missing data should be handled in the Wavelet-Stat and MODE tools: - * "NONE" to perform no masking of missing data + * NONE to perform no masking of missing data - * "FCST" to mask the forecast field with missing observation data + * FCST to mask the forecast field with missing observation data - * "OBS" to mask the observation field with missing forecast data + * OBS to mask the observation field with missing forecast data - * "BOTH" to mask both fields with missing data from the other + * BOTH to mask both fields with missing data from the other .. code-block:: none @@ -1930,10 +1930,10 @@ should be used for computing bootstrap confidence intervals: * The "interval" entry specifies the confidence interval method: - * "BCA" for the BCa (bias-corrected percentile) interval method is + * BCA for the BCa (bias-corrected percentile) interval method is highly accurate but computationally intensive. - * "PCTILE" uses the percentile method which is somewhat less accurate + * PCTILE uses the percentile method which is somewhat less accurate but more efficient. * The "rep_prop" entry specifies a proportion between 0 and 1 to define @@ -1995,11 +1995,11 @@ This dictionary may include the following entries: should be applied. This does not apply when doing point verification with the Point-Stat or Ensemble-Stat tools: - * "FCST" to interpolate/smooth the forecast field. + * FCST to interpolate/smooth the forecast field. - * "OBS" to interpolate/smooth the observation field. + * OBS to interpolate/smooth the observation field. - * "BOTH" to interpolate/smooth both the forecast and the observation. + * BOTH to interpolate/smooth both the forecast and the observation. * The "vld_thresh" entry specifies a number between 0 and 1. 
When performing interpolation over some neighborhood of points the ratio of @@ -2186,7 +2186,7 @@ This dictionary may include the following entries: output line and used for computing probabilistic statistics. * The "shape" entry defines the shape of the neighborhood. - Valid values are "SQUARE" or "CIRCLE" + Valid values are SQUARE or CIRCLE * The "prob_cat_thresh" entry defines the thresholds which define ensemble probabilities from which to compute the ranked probability score output. @@ -2212,11 +2212,11 @@ The "output_flag" entry is a dictionary that specifies what verification methods should be applied to the input data. Options exist for each output line type from the MET tools. Each line type may be set to one of: -* "NONE" to skip the corresponding verification method +* NONE to skip the corresponding verification method -* "STAT" to write the verification output only to the ".stat" output file +* STAT to write the verification output only to the ".stat" output file -* "BOTH" to write to the ".stat" output file as well the optional +* BOTH to write to the ".stat" output file as well the optional "_type.txt" file, a more readable ASCII file sorted by line type. .. code-block:: none @@ -2353,12 +2353,12 @@ Lat/Lon grids. It is only applied for grid-to-grid verification in Grid-Stat and Ensemble-Stat and is not applied for grid-to-point verification. Three grid weighting options are currently supported: -* "NONE" to disable grid weighting using a constant weight (default). +* NONE to disable grid weighting using a constant weight (default). -* "COS_LAT" to define the weight as the cosine of the grid point latitude. +* COS_LAT to define the weight as the cosine of the grid point latitude. This an approximation for grid box area used by NCEP and WMO. -* "AREA" to define the weight as the true area of the grid box (km^2). +* AREA to define the weight as the true area of the grid box (km^2). 
The weights are ultimately computed as the weight at each grid point divided by the sum of the weights for the current masking region. @@ -2403,9 +2403,9 @@ duplicate_flag The "duplicate_flag" entry specifies how to handle duplicate point observations in Point-Stat and Ensemble-Stat: -* "NONE" to use all point observations (legacy behavior) +* NONE to use all point observations (legacy behavior) -* "UNIQUE" only use a single observation if two or more observations +* UNIQUE only use a single observation if two or more observations match. Matching observations are determined if they contain identical latitude, longitude, level, elevation, and time information. They may contain different observation values or station IDs @@ -2427,23 +2427,23 @@ observations that appear at a single location (lat,lon,level,elev) in Point-Stat and Ensemble-Stat. Eight techniques are currently supported: -* "NONE" to use all point observations (legacy behavior) +* NONE to use all point observations (legacy behavior) -* "NEAREST" use only the observation that has the valid +* NEAREST use only the observation that has the valid time closest to the forecast valid time -* "MIN" use only the observation that has the lowest value +* MIN use only the observation that has the lowest value -* "MAX" use only the observation that has the highest value +* MAX use only the observation that has the highest value -* "UW_MEAN" compute an unweighted mean of the observations +* UW_MEAN compute an unweighted mean of the observations -* "DW_MEAN" compute a weighted mean of the observations based +* DW_MEAN compute a weighted mean of the observations based on the time of the observation -* "MEDIAN" use the median observation +* MEDIAN use the median observation -* "PERC" use the Nth percentile observation where N = obs_perc_value +* PERC use the Nth percentile observation where N = obs_perc_value The reporting mechanism for this feature can be activated by specifying a verbosity level of three or higher. 
The report will show information @@ -3204,7 +3204,7 @@ Floating-point max/min options: Setting limits on various floating-point attributes. One may specify these as integers (i.e., without a decimal point), if desired. The following pairs of options indicate minimum and maximum values for each MODE attribute that can be described as a floating- -point number. Please refer to "The MODE Tool" section on attributes in the +point number. Please refer to :ref:`mode-attributes` in the MET User's Guide for a description of these attributes. .. code-block:: none @@ -3371,14 +3371,14 @@ The object definition settings for MODE are contained within the "fcst" and * The "merge_flag" entry specifies the merging methods to be applied: - * "NONE" for no merging + * NONE for no merging - * "THRESH" for the double-threshold merging method. Merge objects + * THRESH for the double-threshold merging method. Merge objects that would be part of the same object at the lower threshold. - * "ENGINE" for the fuzzy logic approach comparing the field to itself + * ENGINE for the fuzzy logic approach comparing the field to itself - * "BOTH" for both the double-threshold and engine merging methods + * BOTH for both the double-threshold and engine merging methods .. code-block:: none @@ -3417,15 +3417,15 @@ match_flag The "match_flag" entry specifies the matching method to be applied: -* "NONE" for no matching between forecast and observation objects +* NONE for no matching between forecast and observation objects -* "MERGE_BOTH" for matching allowing additional merging in both fields. +* MERGE_BOTH for matching allowing additional merging in both fields. If two objects in one field match the same object in the other field, those two objects are merged. 
-* "MERGE_FCST" for matching allowing only additional forecast merging +* MERGE_FCST for matching allowing only additional forecast merging -* "NO_MERGE" for matching with no additional merging in either field +* NO_MERGE for matching with no additional merging in either field .. code-block:: none @@ -3665,9 +3665,9 @@ In the PB2NC tool, the "message_type" entry is an array of message types to be retained. An empty list indicates that all should be retained. | List of valid message types: -| ADPUPA AIRCAR AIRCFT ADPSFC ERS1DA GOESND GPSIPW -| MSONET PROFLR QKSWND RASSDA SATEMP SATWND SFCBOG -| SFCSHP SPSSMI SYNDAT VADWND +| “ADPUPA”, “AIRCAR”, “AIRCFT”, “ADPSFC”, “ERS1DA”, “GOESND”, “GPSIPW”, +| “MSONET”, “PROFLR”, “QKSWND”, “RASSDA”, “SATEMP”, +| “SATWND”, “SFCBOG”, “SFCSHP”, “SPSSMI”, “SYNDAT”, “VADWND” For example: @@ -3885,7 +3885,7 @@ See `Code table for observation quality markers `) and using the empirical ensemble distribution (:ref:`Hersbach, 2000 `). The CRPS statistic using the empirical ensemble distribution can be adjusted (bias corrected) by subtracting 1/(2*m) times the mean absolute difference of the ensemble members, where m is the ensemble size. This is reported as a separate statistic called CRPS_EMP_FAIR. The empirical CRPS and its fair version are included in the Ensemble Continuous Statistics (ECNT) line type, along with other statistics quantifying the ensemble spread and ensemble mean skill. -The Ensemble-Stat tool can derive ensemble relative frequencies and verify them as probability forecasts all in the same run. Note however that these simple ensemble relative frequencies are not actually calibrated probability forecasts. If probabilistic line types are requested (output_flag), this logic is applied to each pair of fields listed in the forecast (fcst) and observation (obs) dictionaries of the configuration file. 
Each probability category threshold (prob_cat_thresh) listed for the forecast field is applied to the input ensemble members to derive a relative frequency forecast. The probability category threshold (prob_cat_thresh) parsed from the corresponding observation entry is applied to the (gridded or point) observations to determine whether or not the event actually occurred. The paired ensemble relative freqencies and observation events are used to populate an Nx2 probabilistic contingency table. The dimension of that table is determined by the probability PCT threshold (prob_pct_thresh) configuration file option parsed from the forecast dictionary. All probabilistic output types requested are derived from the this Nx2 table and written to the ascii output files. Note that the FCST_VAR name header column is automatically reset as "PROB({FCST_VAR}{THRESH})" where {FCST_VAR} is the current field being evaluated and {THRESH} is the threshold that was applied. +The Ensemble-Stat tool can derive ensemble relative frequencies and verify them as probability forecasts all in the same run. Note however that these simple ensemble relative frequencies are not actually calibrated probability forecasts. If probabilistic line types are requested (output_flag), this logic is applied to each pair of fields listed in the forecast (fcst) and observation (obs) dictionaries of the configuration file. Each probability category threshold (prob_cat_thresh) listed for the forecast field is applied to the input ensemble members to derive a relative frequency forecast. The probability category threshold (prob_cat_thresh) parsed from the corresponding observation entry is applied to the (gridded or point) observations to determine whether or not the event actually occurred. The paired ensemble relative frequencies and observation events are used to populate an Nx2 probabilistic contingency table. 
The dimension of that table is determined by the probability PCT threshold (prob_pct_thresh) configuration file option parsed from the forecast dictionary. All probabilistic output types requested are derived from this Nx2 table and written to the ASCII output files. Note that the FCST_VAR name header column is automatically reset as "PROB({FCST_VAR}{THRESH})" where {FCST_VAR} is the current field being evaluated and {THRESH} is the threshold that was applied. Note that if no probability category thresholds (prob_cat_thresh) are defined, but climatological mean and standard deviation data is provided along with climatological bins, climatological distribution percentile thresholds are automatically derived and used to compute probabilistic outputs. diff --git a/docs/Users_Guide/masking.rst b/docs/Users_Guide/masking.rst index 0d705ac06e..5dd8fe72d8 100644 --- a/docs/Users_Guide/masking.rst +++ b/docs/Users_Guide/masking.rst @@ -178,4 +178,4 @@ In this example, the Gen-Vx-Mask tool will read the ASCII Lat/Lon file named **C Feature-Relative Methods ======================== -This section contains a description of several methods that may be used to perform feature-relative (or event -based) evaluation. The methodology pertains to examining the environment surrounding a particular feature or event such as a tropical, extra-tropical cyclone, convective cell, snow-band, etc. Several approaches are available for these types of investigations including applying masking described above (e.g. circle or box) or using the "FORCE" interpolation method in the regrid configuration option (see :numref:`config_options`). These methods generally require additional scripting, including potentially storm-track identification, outside of MET to be paired with the features of the MET tools. METplus may be used to execute this type of analysis. Please refer to the `METplus User's Guide `_.
+This section contains a description of several methods that may be used to perform feature-relative (or event-based) evaluation. The methodology pertains to examining the environment surrounding a particular feature or event such as a tropical or extra-tropical cyclone, convective cell, snow-band, etc. Several approaches are available for these types of investigations including applying masking described above (e.g. circle or box) or using the FORCE interpolation method in the regrid configuration option (see :numref:`config_options`). These methods generally require additional scripting, including potentially storm-track identification, outside of MET to be paired with the features of the MET tools. METplus may be used to execute this type of analysis. Please refer to the `METplus User's Guide `_. diff --git a/docs/Users_Guide/mode.rst b/docs/Users_Guide/mode.rst index bb59cfee3e..2dc4bc3e96 100644 --- a/docs/Users_Guide/mode.rst +++ b/docs/Users_Guide/mode.rst @@ -57,6 +57,7 @@ An example of the steps involved in resolving objects is shown in :numref:`mode- Example of an application of the MODE object identification process to a model precipitation field. +.. _mode-attributes: Attributes ---------- diff --git a/docs/Users_Guide/overview.rst b/docs/Users_Guide/overview.rst index 37cf16b404..1e0d362bb4 100644 --- a/docs/Users_Guide/overview.rst +++ b/docs/Users_Guide/overview.rst @@ -62,7 +62,7 @@ The Grid-Diag tool produces multivariate probability density functions (PDFs) th The Wavelet-Stat tool decomposes two-dimensional forecasts and observations according to the Intensity-Scale verification technique described by :ref:`Casati et al. (2004) `. There are many types of spatial verification approaches and the Intensity-Scale technique belongs to the scale-decomposition (or scale-separation) verification approaches. The spatial scale components are obtained by applying a wavelet transformation to the forecast and observation fields.
The resulting scale-decomposition measures error, bias and skill of the forecast on each spatial scale. Information is provided on the scale dependency of the error and skill, on the no-skill to skill transition scale, and on the ability of the forecast to reproduce the observed scale structure. The Wavelet-Stat tool is primarily used for precipitation fields. However, the tool can be applied to other variables, such as cloud fraction. -Results from the statistical analysis stage are output in ASCII, NetCDF and Postscript formats. The Point-Stat, Grid-Stat, Wavelet-Stat, and Ensemble-Stat tools create STAT (statistics) files which are tabular ASCII files ending with a ".stat" suffix. The STAT output files consist of multiple line types, each containing a different set of related statistics. The columns preceeding the LINE_TYPE column are common to all lines. However, the number and contents of the remaining columns vary by line type. +Results from the statistical analysis stage are output in ASCII, NetCDF, and PostScript formats. The Point-Stat, Grid-Stat, Wavelet-Stat, and Ensemble-Stat tools create STAT (statistics) files which are tabular ASCII files ending with a ".stat" suffix. The STAT output files consist of multiple line types, each containing a different set of related statistics. The columns preceding the LINE_TYPE column are common to all lines. However, the number and contents of the remaining columns vary by line type. The Stat-Analysis and MODE-Analysis tools aggregate the output statistics from the previous steps across multiple cases. The Stat-Analysis tool reads the STAT output of Point-Stat, Grid-Stat, Ensemble-Stat, and Wavelet-Stat and can be used to filter the STAT data and produce aggregated continuous and categorical statistics. Stat-Analysis also reads matched pair data (i.e. MPR line type) via python embedding.
The MODE-Analysis tool reads the ASCII output of the MODE tool and can be used to produce summary information about object location, size, and intensity (as well as other object characteristics) across one or more cases. diff --git a/docs/Users_Guide/point-stat.rst b/docs/Users_Guide/point-stat.rst index 70e3847b79..d6de2d32b1 100644 --- a/docs/Users_Guide/point-stat.rst +++ b/docs/Users_Guide/point-stat.rst @@ -23,7 +23,7 @@ Interpolation/Matching Methods This section provides information about the various methods available in MET to match gridded model output to point observations. Matching in the vertical and horizontal are completed separately using different methods. -In the vertical, if forecasts and observations are at the same vertical level, then they are paired as-is. If any discrepancy exists between the vertical levels, then the forecasts are interpolated to the level of the observation. The vertical interpolation is done in the natural log of pressure coordinates, except for specific humidity, which is interpolated using the natural log of specific humidity in the natural log of pressure coordinates. Vertical interpolation for heights above ground are done linear in height coordinates. When forecasts are for the surface, no interpolation is done. They are matched to observations with message types that are mapped to **SURFACE** in the **message_type_group_map** configuration option. By default, the surface message types include ADPSFC, SFCSHP, and MSONET. The regular expression is applied to the message type list at the message_type_group_map. The derived message types from the time summary ("ADPSFC_MIN_hhmmss" and "ADPSFC_MAX_hhmmss") are accepted as "ADPSFC". +In the vertical, if forecasts and observations are at the same vertical level, then they are paired as-is. If any discrepancy exists between the vertical levels, then the forecasts are interpolated to the level of the observation. 
The vertical interpolation is done in the natural log of pressure coordinates, except for specific humidity, which is interpolated using the natural log of specific humidity in the natural log of pressure coordinates. Vertical interpolation for heights above ground is done linearly in height coordinates. When forecasts are for the surface, no interpolation is done. They are matched to observations with message types that are mapped to "SURFACE" in the **message_type_group_map** configuration option. By default, the surface message types include ADPSFC, SFCSHP, and MSONET. The regular expression is applied to the message type list at the message_type_group_map. The derived message types from the time summary ("ADPSFC_MIN_hhmmss" and "ADPSFC_MAX_hhmmss") are accepted as "ADPSFC". To match forecasts and observations in the horizontal plane, the user can select from a number of methods described below. Many of these methods require the user to define the width of the forecast grid W, around each observation point P, that should be considered. In addition, the user can select the interpolation shape, either a SQUARE or a CIRCLE. For example, a square of width 2 defines the 2 x 2 set of grid points enclosing P, or simply the 4 grid points closest to P. A square of width of 3 defines a 3 x 3 square consisting of 9 grid points centered on the grid point closest to P. :numref:`point_stat_fig1` provides illustration. The point P denotes the observation location where the interpolated value is calculated. The interpolation width W, shown is five. diff --git a/docs/Users_Guide/reformat_point.rst b/docs/Users_Guide/reformat_point.rst index fefe71eef9..d9ad8695c1 100644 --- a/docs/Users_Guide/reformat_point.rst +++ b/docs/Users_Guide/reformat_point.rst @@ -454,7 +454,7 @@ While initial versions of the ASCII2NC tool only supported a simple 11 column AS • `AirNow DailyData_v2, AirNow HourlyData, and AirNow HourlyAQObs formats `_.
See the :ref:`MET_AIRNOW_STATIONS` environment variable. -• `National Data Buoy (NDBC) Standard Meteorlogical Data format `_. See the :ref:`MET_NDBC_STATIONS` environment variable. +• `National Data Buoy Center (NDBC) Standard Meteorological Data format `_. See the :ref:`MET_NDBC_STATIONS` environment variable. • `International Soil Moisture Network (ISMN) Data format `_. diff --git a/docs/Users_Guide/stat-analysis.rst b/docs/Users_Guide/stat-analysis.rst index 92672edc26..0b87586d09 100644 --- a/docs/Users_Guide/stat-analysis.rst +++ b/docs/Users_Guide/stat-analysis.rst @@ -324,7 +324,7 @@ The configuration file for the Stat-Analysis tool is optional. Users may find it Most of the user-specified parameters listed in the Stat-Analysis configuration file are used to filter the ASCII statistical output from the MET statistics tools down to a desired subset of lines over which statistics are to be computed. Only output that meets all of the parameters specified in the Stat-Analysis configuration file will be retained. -The Stat-Analysis tool actually performs a two step process when reading input data. First, it stores the filtering information defined top section of the configuration file. It applies that filtering criteria when reading the input STAT data and writes the filtered data out to a temporary file, as described in :numref:`Contributor's Guide Section %s `. Second, each job defined in the **jobs** entry reads data from that temporary file and performs the task defined for the job. After all jobs have run, the Stat-Analysis tool deletes the temporary file. +The Stat-Analysis tool actually performs a two-step process when reading input data. First, it stores the filtering information defined in the top section of the configuration file. It applies those filtering criteria when reading the input STAT data and writes the filtered data out to a temporary file, as described in :numref:`Contributor's Guide Section %s `.
Second, each job defined in the **jobs** entry reads data from that temporary file and performs the task defined for the job. After all jobs have run, the Stat-Analysis tool deletes the temporary file. This two step process enables the Stat-Analysis tool to run more efficiently when many jobs are defined in the configuration file. If only operating on a small subset of the input data, the common filtering criteria can be applied once rather than re-applying it for each job. In general, filtering criteria common to all tasks defined in the **jobs** entry should be moved to the top section of the configuration file. diff --git a/docs/Users_Guide/tc-pairs.rst b/docs/Users_Guide/tc-pairs.rst index c7a56a05ff..cc1c7dc2cd 100644 --- a/docs/Users_Guide/tc-pairs.rst +++ b/docs/Users_Guide/tc-pairs.rst @@ -211,7 +211,7 @@ The **consensus** array allows users to derive consensus forecasts from any numb - The **members** field is a comma-separated array of model ID strings which define the members of the consensus. - The **required** field is a comma-separated array of true/false values associated with each consensus member. If a member is designated as true, that member must be present in order for the consensus to be generated. If a member is false, the consensus will be generated regardless of whether or not the member is present. The required array can either be empty or have the same length as the members array. If empty, it defaults to all false. - The **min_req** field is the number of members required in order for the consensus to be computed. The **required** and **min_req** field options are applied at each forecast lead time. If any member of the consensus has a non-valid position or intensity value, the consensus for that valid time will not be generated. -- Tropical cyclone diagnostics, if provided on the command line, are included in the computation of consensus tracks. The consensus diagnostics are computed as the mean of the diagnostics for the members.
The **diag_required** and **min_diag_req** entries apply the same logic described above, but to the computation of each consensus diagnostic value rather than the consensus track location and intensity. If **diag_required** is missing or an empty list, it defaults to all false. If **min_diag_req** is missing, it default to 0. +- Tropical cyclone diagnostics, if provided on the command line, are included in the computation of consensus tracks. The consensus diagnostics are computed as the mean of the diagnostics for the members. The **diag_required** and **min_diag_req** entries apply the same logic described above, but to the computation of each consensus diagnostic value rather than the consensus track location and intensity. If **diag_required** is missing or an empty list, it defaults to all false. If **min_diag_req** is missing, it defaults to 0. - The **write_members** field is a boolean that indicates whether or not to write track output for the individual consensus members. If set to true, standard output will show up for all members. If set to false, output for the consensus members is excluded from the output, even if they are used to define other consensus tracks in the configuration file. Users should take care to avoid filtering out track data for the consensus members with the **model** field, described above. Either set **model** to an empty list to process all input track data or include all of the consensus members in the **model** list. Use the **write_members** field, not the **model** field, to suppress track output for consensus members. 
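The consensus logic described above (a mean over member values, subject to the **required** flags and the **min_req** count) can be sketched as follows. This is an illustrative Python sketch of the documented behavior, not the MET implementation; the function name and data layout are hypothetical.

```python
def consensus_value(member_values, required, min_req):
    """Compute a consensus (mean) from member values at one lead time.

    member_values: list of floats, or None for a missing/invalid member.
    required: list of booleans, same length, or empty (all optional).
    min_req: minimum number of valid members needed for a consensus.
    Returns the mean of the valid members, or None if requirements fail.
    """
    if not required:
        # An empty required array defaults to all false (all optional)
        required = [False] * len(member_values)

    # Any required member that is missing blocks the consensus
    for value, req in zip(member_values, required):
        if req and value is None:
            return None

    valid = [v for v in member_values if v is not None]
    if len(valid) < min_req:
        return None

    return sum(valid) / len(valid)
```

The same precedence applies per lead time for both the track consensus and each consensus diagnostic value.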
diff --git a/internal/test_unit/config/GenEnsProdConfig b/internal/test_unit/config/GenEnsProdConfig index 813272dc14..9841006614 100644 --- a/internal/test_unit/config/GenEnsProdConfig +++ b/internal/test_unit/config/GenEnsProdConfig @@ -13,7 +13,6 @@ model = "FCST"; // // Output description to be written -// May be set separately in each "obs.field" entry // desc = "NA"; diff --git a/internal/test_unit/config/GenEnsProdConfig_climo_anom_ens_member_id b/internal/test_unit/config/GenEnsProdConfig_climo_anom_ens_member_id index adebdb2528..440b528326 100644 --- a/internal/test_unit/config/GenEnsProdConfig_climo_anom_ens_member_id +++ b/internal/test_unit/config/GenEnsProdConfig_climo_anom_ens_member_id @@ -13,7 +13,6 @@ model = "CFSv2"; // // Output description to be written -// May be set separately in each "obs.field" entry // desc = "NA"; diff --git a/internal/test_unit/config/GenEnsProdConfig_normalize b/internal/test_unit/config/GenEnsProdConfig_normalize index b23708ab46..192c75cb5b 100644 --- a/internal/test_unit/config/GenEnsProdConfig_normalize +++ b/internal/test_unit/config/GenEnsProdConfig_normalize @@ -13,7 +13,6 @@ model = "FCST"; // // Output description to be written -// May be set separately in each "obs.field" entry // desc = "NA"; diff --git a/internal/test_unit/config/GenEnsProdConfig_single_file_grib b/internal/test_unit/config/GenEnsProdConfig_single_file_grib index b1f2bb3315..82f31da619 100644 --- a/internal/test_unit/config/GenEnsProdConfig_single_file_grib +++ b/internal/test_unit/config/GenEnsProdConfig_single_file_grib @@ -13,7 +13,6 @@ model = "GEFS"; // // Output description to be written -// May be set separately in each "obs.field" entry // desc = "NA"; diff --git a/internal/test_unit/config/GenEnsProdConfig_single_file_nc b/internal/test_unit/config/GenEnsProdConfig_single_file_nc index 2b4be6e12b..9d84b2bcbc 100644 --- a/internal/test_unit/config/GenEnsProdConfig_single_file_nc +++ 
b/internal/test_unit/config/GenEnsProdConfig_single_file_nc @@ -13,7 +13,6 @@ model = "CFSv2"; // // Output description to be written -// May be set separately in each "obs.field" entry // desc = "NA"; diff --git a/internal/test_unit/config/GridStatConfig_climo_FCST_NCEP_1.0DEG_OBS_WMO_1.5DEG b/internal/test_unit/config/GridStatConfig_climo_FCST_NCEP_1.0DEG_OBS_WMO_1.5DEG index ab1cdd8362..8783cbd9e1 100644 --- a/internal/test_unit/config/GridStatConfig_climo_FCST_NCEP_1.0DEG_OBS_WMO_1.5DEG +++ b/internal/test_unit/config/GridStatConfig_climo_FCST_NCEP_1.0DEG_OBS_WMO_1.5DEG @@ -75,16 +75,24 @@ fcst = { climo_mean = { field = field_list; file_name = [ "${FCST_CLIMO_DIR}/cmean_1d.19590410" ]; + + regrid = { + method = BILIN; + width = 2; + vld_thresh = 0.5; + shape = SQUARE; + } + + time_interp_method = DW_MEAN; + day_interval = 1; + hour_interval = 6; }; + climo_stdev = climo_mean; climo_stdev = { - field = field_list; file_name = [ "${FCST_CLIMO_DIR}/cstdv_1d.19590410" ]; }; - time_interp_method = DW_MEAN; - day_interval = 1; - hour_interval = 6; } obs = { @@ -99,18 +107,24 @@ obs = { "${OBS_CLIMO_DIR}/u850hPa_mean.grib", "${OBS_CLIMO_DIR}/v500hPa_mean.grib", "${OBS_CLIMO_DIR}/v850hPa_mean.grib" ]; + regrid = { + method = BILIN; + width = 2; + vld_thresh = 0.5; + shape = SQUARE; + } + + time_interp_method = DW_MEAN; + day_interval = 1; + hour_interval = 12; }; + climo_stdev = climo_mean; climo_stdev = { - field = field_list; file_name = [ "${OBS_CLIMO_DIR}/t850hPa_stdev.grib", "${OBS_CLIMO_DIR}/u850hPa_stdev.grib", "${OBS_CLIMO_DIR}/v850hPa_stdev.grib" ]; }; - - time_interp_method = DW_MEAN; - day_interval = 1; - hour_interval = 12; } //////////////////////////////////////////////////////////////////////////////// diff --git a/internal/test_unit/config/SeriesAnalysisConfig_const_climo b/internal/test_unit/config/SeriesAnalysisConfig_const_climo new file mode 100644 index 0000000000..df991f208e --- /dev/null +++ 
b/internal/test_unit/config/SeriesAnalysisConfig_const_climo @@ -0,0 +1,162 @@ +//////////////////////////////////////////////////////////////////////////////// +// +// Series-Analysis configuration file. +// +// For additional information, please see the MET User's Guide. +// +//////////////////////////////////////////////////////////////////////////////// + +// +// Output model name to be written +// +model = "GFS"; + +// +// Output description to be written +// +desc = "NA"; + +// +// Output observation type to be written +// +obtype = "GFSANL"; + +//////////////////////////////////////////////////////////////////////////////// + +// +// Verification grid +// +regrid = { + to_grid = NONE; + method = NEAREST; + width = 1; + vld_thresh = 0.5; +} + +//////////////////////////////////////////////////////////////////////////////// + +censor_thresh = []; +censor_val = []; +cat_thresh = []; +cnt_thresh = [ NA ]; +cnt_logic = UNION; + +// +// Forecast and observation fields to be verified +// +fcst = { + field = [ + { name = "TMP"; level = "P850"; valid_time = "20120409_12"; }, + { name = "TMP"; level = "P850"; valid_time = "20120410_00"; }, + { name = "TMP"; level = "P850"; valid_time = "20120410_12"; } + ]; +} +obs = { + field = [ + { name = "TMP"; level = "P850"; valid_time = "20120409_12"; }, + { name = "TMP"; level = "P850"; valid_time = "20120410_00"; }, + { name = "TMP"; level = "P850"; valid_time = "20120410_12"; } + ]; +} + +//////////////////////////////////////////////////////////////////////////////// + +// +// Climatology data +// +climo_mean = fcst; +climo_mean = { + + file_name = [ ${CLIMO_MEAN_FILE_LIST} ]; + + field = [ + { name = "TMP"; level = "P850"; valid_time = "19590409_00"; } + ]; + + regrid = { + method = BILIN; + width = 2; + vld_thresh = 0.5; + } + + time_interp_method = NEAREST; + day_interval = NA; + hour_interval = NA; +} + +climo_stdev = climo_mean; +climo_stdev = { + file_name = [ ${CLIMO_STDEV_FILE_LIST} ]; +} + +climo_cdf = { + cdf_bins 
= 1; + center_bins = FALSE; + direct_prob = FALSE; +} + +//////////////////////////////////////////////////////////////////////////////// + +// +// Confidence interval settings +// +ci_alpha = [ 0.05 ]; + +boot = { + interval = PCTILE; + rep_prop = 1.0; + n_rep = 0; + rng = "mt19937"; + seed = "1"; +} + +//////////////////////////////////////////////////////////////////////////////// + +// +// Verification masking regions +// +mask = { + grid = ""; + poly = ""; +} + +// +// Number of grid points to be processed concurrently. Set smaller to use less +// memory but increase the number of passes through the data. If set <= 0, all +// grid points are processed concurrently. +// +block_size = 0; + +// +// Ratio of valid matched pairs to compute statistics for a grid point +// +vld_thresh = 0.5; + +//////////////////////////////////////////////////////////////////////////////// + +// +// Statistical output types +// +output_stats = { + fho = [ ]; + ctc = [ ]; + cts = [ ]; + mctc = [ ]; + mcts = [ ]; + cnt = [ "TOTAL", "RMSE", "ANOM_CORR" ]; + sl1l2 = [ ]; + sal1l2 = [ ]; + pct = [ ]; + pstd = [ ]; + pjc = [ ]; + prc = [ ]; +} + +//////////////////////////////////////////////////////////////////////////////// + +hss_ec_value = NA; +rank_corr_flag = FALSE; +tmp_dir = "/tmp"; +version = "V12.0.0"; + +//////////////////////////////////////////////////////////////////////////////// diff --git a/internal/test_unit/xml/unit_climatology_1.0deg.xml b/internal/test_unit/xml/unit_climatology_1.0deg.xml index fcd6b59668..699026825e 100644 --- a/internal/test_unit/xml/unit_climatology_1.0deg.xml +++ b/internal/test_unit/xml/unit_climatology_1.0deg.xml @@ -186,6 +186,34 @@ + + &MET_BIN;/series_analysis + + CLIMO_MEAN_FILE_LIST + "&DATA_DIR_CLIMO;/NCEP_NCAR_40YR_1.0deg/cmean_1d.19590409" + + + CLIMO_STDEV_FILE_LIST + "&DATA_DIR_CLIMO;/NCEP_NCAR_40YR_1.0deg/cstdv_1d.19590409" + + + + \ + -fcst &DATA_DIR_MODEL;/grib2/gfs/gfs_2012040900_F012.grib2 \ + 
&DATA_DIR_MODEL;/grib2/gfs/gfs_2012040900_F024.grib2 \ + &DATA_DIR_MODEL;/grib2/gfs/gfs_2012040900_F036.grib2 \ + -obs &DATA_DIR_MODEL;/grib2/gfsanl/gfsanl_4_20120409_1200_000.grb2 \ + &DATA_DIR_MODEL;/grib2/gfsanl/gfsanl_4_20120410_0000_000.grb2 \ + &DATA_DIR_MODEL;/grib2/gfsanl/gfsanl_4_20120410_1200_000.grb2 \ + -out &OUTPUT_DIR;/climatology_1.0deg/series_analysis_GFS_CLIMO_1.0DEG_CONST_CLIMO.nc \ + -config &CONFIG_DIR;/SeriesAnalysisConfig_const_climo \ + -v 3 + + + &OUTPUT_DIR;/climatology_1.0deg/series_analysis_GFS_CLIMO_1.0DEG_CONST_CLIMO.nc + + + &MET_BIN;/series_analysis diff --git a/scripts/config/GenEnsProdConfig b/scripts/config/GenEnsProdConfig index 74350a328d..65d13aadbd 100644 --- a/scripts/config/GenEnsProdConfig +++ b/scripts/config/GenEnsProdConfig @@ -13,7 +13,6 @@ model = "FCST"; // // Output description to be written -// May be set separately in each "obs.field" entry // desc = "NA"; diff --git a/src/basic/vx_cal/is_leap_year.cc b/src/basic/vx_cal/is_leap_year.cc index d37854d690..a383041475 100644 --- a/src/basic/vx_cal/is_leap_year.cc +++ b/src/basic/vx_cal/is_leap_year.cc @@ -102,7 +102,7 @@ void adjuste_day_for_month_year_units(int &day, int &month, int &year, double mo // Compute remaining days from the month fraction bool day_adjusted = false; const int day_offset = (int)(month_fraction * DAYS_PER_MONTH + 0.5); - const char *method_name = "adjuste_day() --> "; + const char *method_name = "adjuste_day_for_month_year_units() -> "; day += day_offset; if (day == 1 && abs(month_fraction-0.5) < DAY_EPSILON) { @@ -162,7 +162,7 @@ unixtime add_to_unixtime(unixtime base_unixtime, int sec_per_unit, unixtime ut; auto time_value_ut = (unixtime)time_value; double time_fraction = time_value - (double)time_value_ut; - const char *method_name = "add_to_unixtime() -->"; + const char *method_name = "add_to_unixtime() -> "; if (sec_per_unit == SEC_MONTH || sec_per_unit == SEC_YEAR) { if (time_value < 0) { diff --git a/src/basic/vx_config/config_constants.h 
b/src/basic/vx_config/config_constants.h index e1a18aeb1a..7bba9e759e 100644 --- a/src/basic/vx_config/config_constants.h +++ b/src/basic/vx_config/config_constants.h @@ -297,7 +297,7 @@ struct InterpInfo { void clear(); void validate(); // Ensure that width and method are accordant bool operator==(const InterpInfo &) const; - InterpInfo &operator=(const InterpInfo &a) noexcept; // SoanrQube findings + InterpInfo &operator=(const InterpInfo &a) noexcept; // SonarQube findings }; //////////////////////////////////////////////////////////////////////// @@ -329,6 +329,7 @@ struct RegridInfo { void validate(); // ensure that width and method are accordant void validate_point(); // ensure that width and method are accordant RegridInfo &operator=(const RegridInfo &a) noexcept; // SoanrQube findings + ConcatString get_str() const; }; //////////////////////////////////////////////////////////////////////// @@ -725,12 +726,10 @@ static const char conf_key_is_prob[] = "is_prob"; // // Climatology data parameter key names // -static const char conf_key_climo_mean_field[] = "climo_mean.field"; -static const char conf_key_fcst_climo_mean_field[] = "fcst.climo_mean.field"; -static const char conf_key_obs_climo_mean_field[] = "obs.climo_mean.field"; -static const char conf_key_climo_stdev_field[] = "climo_stdev.field"; -static const char conf_key_fcst_climo_stdev_field[] = "fcst.climo_stdev.field"; -static const char conf_key_obs_climo_stdev_field[] = "obs.climo_stdev.field"; +static const char conf_key_climo_mean[] = "climo_mean"; +static const char conf_key_climo_mean_field[] = "climo_mean.field"; +static const char conf_key_climo_stdev[] = "climo_stdev"; +static const char conf_key_climo_stdev_field[] = "climo_stdev.field"; // // Climatology distribution parameter key names diff --git a/src/basic/vx_config/config_util.cc b/src/basic/vx_config/config_util.cc index 344f997bea..5cce67dfcb 100644 --- a/src/basic/vx_config/config_util.cc +++ b/src/basic/vx_config/config_util.cc @@ 
-14,6 +14,7 @@ #include "config_util.h" #include "enum_as_int.hpp" +#include "configobjecttype_to_string.h" #include "vx_math.h" #include "vx_util.h" @@ -265,6 +266,13 @@ RegridInfo &RegridInfo::operator=(const RegridInfo &a) noexcept { return *this; } +/////////////////////////////////////////////////////////////////////////////// + +ConcatString RegridInfo::get_str() const { + ConcatString cs(interpmthd_to_string(method)); + cs << "(" << width << ")"; + return cs; +} /////////////////////////////////////////////////////////////////////////////// @@ -1331,13 +1339,10 @@ BootInfo parse_conf_boot(Dictionary *dict) { return info; } - /////////////////////////////////////////////////////////////////////////////// -RegridInfo parse_conf_regrid(Dictionary *dict, bool error_out) { - Dictionary *regrid_dict = (Dictionary *) nullptr; +RegridInfo parse_conf_regrid(Dictionary *dict, RegridInfo *default_info, bool error_out) { RegridInfo info; - int v; if(!dict) { mlog << Error << "\nparse_conf_regrid() -> " @@ -1346,10 +1351,10 @@ RegridInfo parse_conf_regrid(Dictionary *dict, bool error_out) { } // Conf: regrid - regrid_dict = dict->lookup_dictionary(conf_key_regrid, false); + Dictionary *regrid_dict = dict->lookup_dictionary(conf_key_regrid, false); // Check that the regrid dictionary is present - if(!regrid_dict) { + if(!regrid_dict && !default_info) { if(error_out) { mlog << Error << "\nparse_conf_regrid() -> " << "can't find the \"regrid\" dictionary!\n\n"; @@ -1360,61 +1365,164 @@ RegridInfo parse_conf_regrid(Dictionary *dict, bool error_out) { } } - // Parse to_grid as an integer - v = regrid_dict->lookup_int(conf_key_to_grid, false, false); + // Conf: to_grid (optional) as an integer or string + const DictionaryEntry * entry = nullptr; + + if(regrid_dict) entry = regrid_dict->lookup(conf_key_to_grid, false); + + // to_grid found + if(entry) { - // If integer lookup successful, convert to FieldType. 
- if(regrid_dict->last_lookup_status()) { - info.field = int_to_fieldtype(v); - info.enable = (info.field == FieldType::Fcst || - info.field == FieldType::Obs); + // Convert integer to FieldType + if(entry->type() == IntegerType) { + info.field = int_to_fieldtype(entry->i_value()); + info.enable = (info.field == FieldType::Fcst || + info.field == FieldType::Obs); + } + // Store grid name string + else if(entry->type() == StringType) { + info.name = entry->string_value(); + info.enable = true; + } + else { + mlog << Error << "\nparse_conf_regrid() -> " + << "Unexpected type (" + << configobjecttype_to_string(entry->type()) + << ") for \"" << conf_key_to_grid + << "\" configuration entry.\n\n"; + exit(1); + } + } + // Use default RegridInfo + else if(default_info){ + info.name = default_info->name; + info.enable = default_info->enable; } - // If integer lookup unsuccessful, parse vx_grid as a string. - // Do not error out since to_grid isn't specified for climo.regrid. + // Use global default else { - info.name = regrid_dict->lookup_string(conf_key_to_grid, false); + info.name = ""; info.enable = true; } - // Conf: vld_thresh - double thr = regrid_dict->lookup_double(conf_key_vld_thresh, false); - info.vld_thresh = (is_bad_data(thr) ? 
default_vld_thresh : thr); + // Conf: vld_thresh (required) + if(regrid_dict && regrid_dict->lookup(conf_key_vld_thresh, false)) { + info.vld_thresh = regrid_dict->lookup_double(conf_key_vld_thresh); + } + // Use default RegridInfo + else if(default_info) { + info.vld_thresh = default_info->vld_thresh; + } + // Use global default + else { + info.vld_thresh = default_vld_thresh; + } + + // Conf: method (required) + if(regrid_dict && regrid_dict->lookup(conf_key_method, false)) { + info.method = int_to_interpmthd(regrid_dict->lookup_int(conf_key_method)); + } + // Use default RegridInfo + else if(default_info) { + info.method = default_info->method; + } - // Parse the method and width - info.method = int_to_interpmthd(regrid_dict->lookup_int(conf_key_method)); - info.width = regrid_dict->lookup_int(conf_key_width); + // Conf: width (required) + if(regrid_dict && regrid_dict->lookup(conf_key_width, false)) { + info.width = regrid_dict->lookup_int(conf_key_width); + } + // Use default RegridInfo + else if(default_info) { + info.width = default_info->width; + } - // Conf: shape - v = regrid_dict->lookup_int(conf_key_shape, false); - if (regrid_dict->last_lookup_status()) { - info.shape = int_to_gridtemplate(v); + // Conf: shape (optional) + if(regrid_dict && regrid_dict->lookup(conf_key_shape, false)) { + info.shape = int_to_gridtemplate(regrid_dict->lookup_int(conf_key_shape)); + } + // Use default RegridInfo + else if(default_info) { + info.shape = default_info->shape; } + // Use global default else { - // If not specified, use the default square shape info.shape = GridTemplateFactory::GridTemplates::Square; } - // Conf: gaussian dx and radius - double conf_value = regrid_dict->lookup_double(conf_key_gaussian_dx, false); - info.gaussian.dx = (is_bad_data(conf_value) ? default_gaussian_dx : conf_value); - conf_value = regrid_dict->lookup_double(conf_key_gaussian_radius, false); - info.gaussian.radius = (is_bad_data(conf_value) ? 
default_gaussian_radius : conf_value); - conf_value = regrid_dict->lookup_double(conf_key_trunc_factor, false); - info.gaussian.trunc_factor = (is_bad_data(conf_value) ? default_trunc_factor : conf_value); - if (info.method == InterpMthd::Gaussian || info.method == InterpMthd::MaxGauss) info.gaussian.compute(); + // Conf: gaussian_dx (optional) + if(regrid_dict && regrid_dict->lookup(conf_key_gaussian_dx, false)) { + info.gaussian.dx = regrid_dict->lookup_double(conf_key_gaussian_dx); + } + // Use default RegridInfo + else if(default_info) { + info.gaussian.dx = default_info->gaussian.dx; + } + // Use global default + else { + info.gaussian.dx = default_gaussian_dx; + } + + // Conf: gaussian_radius (optional) + if(regrid_dict && regrid_dict->lookup(conf_key_gaussian_radius, false)) { + info.gaussian.radius = regrid_dict->lookup_double(conf_key_gaussian_radius); + } + // Use default RegridInfo + else if(default_info) { + info.gaussian.radius = default_info->gaussian.radius; + } + // Use global default + else { + info.gaussian.radius = default_gaussian_radius; + } + + // Conf: gaussian_trunc_factor (optional) + if(regrid_dict && regrid_dict->lookup(conf_key_trunc_factor, false)) { + info.gaussian.trunc_factor = regrid_dict->lookup_double(conf_key_trunc_factor); + } + // Use default RegridInfo + else if(default_info) { + info.gaussian.trunc_factor = default_info->gaussian.trunc_factor; + } + // Use global default + else { + info.gaussian.trunc_factor = default_trunc_factor; + } + + // Compute Gaussian parameters + if(info.method == InterpMthd::Gaussian || + info.method == InterpMthd::MaxGauss) { + info.gaussian.compute(); + } // MET#2437 Do not search the higher levels of config file context for convert, // censor_thresh, and censor_val. They must be specified within the // regrid dictionary itself.
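The updated `parse_conf_regrid()` applies the same three-tier precedence to each entry: an explicit value in the local `regrid` dictionary wins, then the caller-supplied default `RegridInfo`, then the compiled-in global default. A minimal sketch of that precedence, using a `std::map` stand-in for the MET `Dictionary` class and a hypothetical `MiniRegrid` struct (names are illustrative, not MET API):

```cpp
#include <map>
#include <string>

// Hypothetical stand-in for the caller-supplied default RegridInfo.
struct MiniRegrid { double vld_thresh; };

// Resolve "vld_thresh" with the same precedence as parse_conf_regrid():
// explicit local entry -> default RegridInfo -> global default.
double resolve_vld_thresh(const std::map<std::string,double> &regrid_dict,
                          const MiniRegrid *default_info,
                          double global_default) {
   auto it = regrid_dict.find("vld_thresh");
   if(it != regrid_dict.end()) return it->second; // explicit entry wins
   if(default_info) return default_info->vld_thresh; // caller default
   return global_default;                            // global fallback
}
```

This is why the function can now tolerate a missing `regrid` dictionary entirely when `default_info` is supplied, instead of erroring out as before.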
- // Conf: convert - info.convert_fx.set(regrid_dict->lookup(conf_key_convert, false)); + // Conf: convert (optional) + if(regrid_dict && regrid_dict->lookup(conf_key_convert, false)) { + info.convert_fx.set(regrid_dict->lookup(conf_key_convert)); + } + // Use default RegridInfo + else if(default_info) { + info.convert_fx = default_info->convert_fx; + } - // Conf: censor_thresh - info.censor_thresh = regrid_dict->lookup_thresh_array(conf_key_censor_thresh, false, true, false); + // Conf: censor_thresh (optional) + if(regrid_dict && regrid_dict->lookup(conf_key_censor_thresh, false)) { + info.censor_thresh = regrid_dict->lookup_thresh_array(conf_key_censor_thresh); + } + // Use default RegridInfo + else if(default_info) { + info.censor_thresh = default_info->censor_thresh; + } - // Conf: censor_val - info.censor_val = regrid_dict->lookup_num_array(conf_key_censor_val, false, true, false); + // Conf: censor_val (optional) + if(regrid_dict && regrid_dict->lookup(conf_key_censor_val, false)) { + info.censor_val = regrid_dict->lookup_num_array(conf_key_censor_val); + } + // Use default RegridInfo + else if(default_info) { + info.censor_val = default_info->censor_val; + } // Validate the settings info.validate(); @@ -2514,28 +2622,28 @@ void check_mask_names(const StringArray &sa) { /////////////////////////////////////////////////////////////////////////////// -void check_climo_n_vx(Dictionary *dict, const int n_vx) { - int n; +void check_climo_n_vx(Dictionary *dict, const int n_input) { + int n_climo; // Check for a valid number of climatology mean fields - n = parse_conf_n_vx(dict->lookup_array(conf_key_climo_mean_field, false)); - if(n != 0 && n != n_vx) { + n_climo = parse_conf_n_vx(dict->lookup_array(conf_key_climo_mean_field, false)); + if(n_climo != 0 && n_climo != 1 && n_climo != n_input) { mlog << Error << "\ncheck_climo_n_vx() -> " << "The number of climatology mean fields in \"" - << conf_key_climo_mean_field - << "\" must be zero or match the number (" << 
n_vx - << ") in \"" << conf_key_fcst_field << "\".\n\n"; + << conf_key_climo_mean_field << "\" (" << n_climo + << ") must be 0, 1, or match the number of input fields (" + << n_input << ").\n\n"; exit(1); } // Check for a valid number of climatology standard deviation fields - n = parse_conf_n_vx(dict->lookup_array(conf_key_climo_stdev_field, false)); - if(n != 0 && n != n_vx) { + n_climo = parse_conf_n_vx(dict->lookup_array(conf_key_climo_stdev_field, false)); + if(n_climo != 0 && n_climo != 1 && n_climo != n_input) { mlog << Error << "\ncheck_climo_n_vx() -> " << "The number of climatology standard deviation fields in \"" - << conf_key_climo_stdev_field - << "\" must be zero or match the number (" - << n_vx << ") in \"" << conf_key_fcst_field << "\".\n\n"; + << conf_key_climo_stdev_field << "\" (" << n_climo + << ") must be 0, 1, or match the number of input fields (" + << n_input << ").\n\n"; exit(1); } diff --git a/src/basic/vx_config/config_util.h b/src/basic/vx_config/config_util.h index 3dae869b2b..15de1e00f0 100644 --- a/src/basic/vx_config/config_util.h +++ b/src/basic/vx_config/config_util.h @@ -31,18 +31,34 @@ static const char conf_key_old_prepbufr_map[] = "obs_prefbufr_map"; // for ba //////////////////////////////////////////////////////////////////////// extern ConcatString parse_conf_version(Dictionary *dict); -extern ConcatString parse_conf_string(Dictionary *dict, const char *, bool check_empty = true); +extern ConcatString parse_conf_string( + Dictionary *dict, + const char *, + bool check_empty=true); extern GrdFileType parse_conf_file_type(Dictionary *dict); extern std::map - parse_conf_output_flag(Dictionary *dict, const STATLineType *, int); + parse_conf_output_flag( + Dictionary *dict, + const STATLineType *, int); extern std::map parse_conf_output_stats(Dictionary *dict); extern int parse_conf_n_vx(Dictionary *dict); -extern Dictionary parse_conf_i_vx_dict(Dictionary *dict, int index); -extern StringArray parse_conf_tc_model(Dictionary 
*dict, bool error_out = default_dictionary_error_out); -extern StringArray parse_conf_message_type(Dictionary *dict, bool error_out = default_dictionary_error_out); -extern StringArray parse_conf_sid_list(Dictionary *dict, const char *); -extern void parse_sid_mask(const ConcatString &, StringArray &, ConcatString &); +extern Dictionary parse_conf_i_vx_dict( + Dictionary *dict, + int index); +extern StringArray parse_conf_tc_model( + Dictionary *dict, + bool error_out=default_dictionary_error_out); +extern StringArray parse_conf_message_type( + Dictionary *dict, + bool error_out=default_dictionary_error_out); +extern StringArray parse_conf_sid_list( + Dictionary *dict, + const char *); +extern void parse_sid_mask( + const ConcatString &, + StringArray &, + ConcatString &); extern std::vector parse_conf_llpnt_mask(Dictionary *dict); extern StringArray parse_conf_obs_qty_inc(Dictionary *dict); @@ -51,27 +67,40 @@ extern NumArray parse_conf_ci_alpha(Dictionary *dict); extern NumArray parse_conf_eclv_points(Dictionary *dict); extern ClimoCDFInfo parse_conf_climo_cdf(Dictionary *dict); extern TimeSummaryInfo parse_conf_time_summary(Dictionary *dict); -extern std::map parse_conf_key_value_map( - Dictionary *dict, const char *conf_key_map_name, const char *caller=nullptr); +extern std::map + parse_conf_key_value_map( + Dictionary *dict, + const char *conf_key_map_name, + const char *caller=nullptr); extern void parse_add_conf_key_value_map( - Dictionary *dict, const char *conf_key_map_name, std::map *m); + Dictionary *dict, + const char *conf_key_map_name, + std::map *m); extern void parse_add_conf_key_values_map( - Dictionary *dict, const char *conf_key_map_name, - std::map *m, const char *caller=nullptr); + Dictionary *dict, + const char *conf_key_map_name, + std::map *m, + const char *caller=nullptr); extern std::map parse_conf_message_type_map(Dictionary *dict); extern std::map parse_conf_message_type_group_map(Dictionary *dict); -extern std::map 
parse_conf_metadata_map(Dictionary *dict); +extern std::map + parse_conf_metadata_map(Dictionary *dict); extern std::map parse_conf_obs_name_map(Dictionary *dict); extern std::map parse_conf_obs_to_qc_map(Dictionary *dict); extern std::map parse_conf_key_convert_map( - Dictionary *dict, const char *conf_key_map_name, const char *caller=nullptr); + Dictionary *dict, + const char *conf_key_map_name, + const char *caller=nullptr); extern BootInfo parse_conf_boot(Dictionary *dict); -extern RegridInfo parse_conf_regrid(Dictionary *dict, bool error_out = default_dictionary_error_out); +extern RegridInfo parse_conf_regrid( + Dictionary *dict, + RegridInfo *default_info=nullptr, + bool error_out=default_dictionary_error_out); extern InterpInfo parse_conf_interp(Dictionary *dict, const char *); extern NbrhdInfo parse_conf_nbrhd(Dictionary *dict, const char *); extern HiRAInfo parse_conf_hira(Dictionary *dict); @@ -92,7 +121,9 @@ extern ConcatString parse_conf_ugrid_coordinates_file(Dictionary *dict); extern ConcatString parse_conf_ugrid_dataset(Dictionary *dict); extern ConcatString parse_conf_ugrid_map_config(Dictionary *dict); extern double parse_conf_ugrid_max_distance_km(Dictionary *dict); -extern void parse_add_conf_ugrid_metadata_map(Dictionary *dict, std::map *m); +extern void parse_add_conf_ugrid_metadata_map( + Dictionary *dict, + std::map *m); extern void check_mask_names(const StringArray &); diff --git a/src/basic/vx_config/dictionary.h b/src/basic/vx_config/dictionary.h index 97faa4e3dc..bcb4a7f34b 100644 --- a/src/basic/vx_config/dictionary.h +++ b/src/basic/vx_config/dictionary.h @@ -243,7 +243,7 @@ class Dictionary { virtual const DictionaryEntry * operator[](int) const; - virtual const Dictionary * parent() const; + virtual Dictionary * parent() const; virtual bool is_array() const; @@ -346,7 +346,7 @@ class Dictionary { inline int Dictionary::n_entries() const { return Nentries; } -inline const Dictionary * Dictionary::parent() const { return Parent; } 
+inline Dictionary * Dictionary::parent() const { return Parent; } inline void Dictionary::set_is_array(bool __tf) { IsArray = __tf; return; } diff --git a/src/basic/vx_config/threshold.cc b/src/basic/vx_config/threshold.cc index cbf0a3cb7d..ef650ef2c0 100644 --- a/src/basic/vx_config/threshold.cc +++ b/src/basic/vx_config/threshold.cc @@ -114,7 +114,7 @@ if ( !match && mlog << Debug(2) << R"(Please replace the deprecated "SCP" and "CDP" )" << R"(threshold types with "SOCP" and "OCDP", respectively, in the ")" - << str << R"(" threshold string.\n)"; + << str << R"(" threshold string.)" << "\n"; print_climo_perc_thresh_log_message = false; diff --git a/src/libcode/vx_data2d/var_info.cc b/src/libcode/vx_data2d/var_info.cc index 2c92c6bf69..ac12513e76 100644 --- a/src/libcode/vx_data2d/var_info.cc +++ b/src/libcode/vx_data2d/var_info.cc @@ -116,6 +116,7 @@ void VarInfo::assign(const VarInfo &v) { nBins = v.nBins; Range = v.Range; + DefaultRegrid = v.DefaultRegrid; Regrid = v.Regrid; SetAttrName = v.SetAttrName; @@ -176,6 +177,7 @@ void VarInfo::clear() { nBins = 0; Range.clear(); + DefaultRegrid.clear(); Regrid.clear(); SetAttrName.clear(); @@ -215,26 +217,29 @@ void VarInfo::dump(ostream &out) const { // Dump out the contents out << "VarInfo::dump():\n" - << " MagicStr = " << MagicStr.contents() << "\n" - << " ReqName = " << ReqName.contents() << "\n" - << " Name = " << Name.contents() << "\n" - << " LongName = " << LongName.contents() << "\n" - << " Units = " << Units.contents() << "\n" - << " PFlag = " << PFlag << "\n" - << " PName = " << PName.contents() << "\n" - << " PUnits = " << PUnits.contents() << "\n" - << " PAsScalar = " << PAsScalar << "\n" - << " UVIndex = " << UVIndex << "\n" - << " Init = " << init_str << " (" << Init << ")\n" - << " Valid = " << valid_str << " (" << Valid << ")\n" - << " Ensemble = " << Ensemble.contents() << "\n" - << " Lead = " << lead_str << " (" << Lead << ")\n" - << " ConvertFx = " << (ConvertFx.is_set() ? 
"IsSet" : "(nul)") << "\n" - << " CensorThresh = " << CensorThresh.get_str() << "\n" - << " CensorVal = " << CensorVal.serialize() << "\n" - << " nBins = " << nBins << "\n" - << " Range = " << Range.serialize() << "\n" - << " Regrid = " << interpmthd_to_string(Regrid.method) << "\n"; + << " MagicStr = " << MagicStr.contents() << "\n" + << " ReqName = " << ReqName.contents() << "\n" + << " Name = " << Name.contents() << "\n" + << " LongName = " << LongName.contents() << "\n" + << " Units = " << Units.contents() << "\n" + << " PFlag = " << PFlag << "\n" + << " PName = " << PName.contents() << "\n" + << " PUnits = " << PUnits.contents() << "\n" + << " PAsScalar = " << PAsScalar << "\n" + << " UVIndex = " << UVIndex << "\n" + << " Init = " << init_str << " (" << Init << ")\n" + << " Valid = " << valid_str << " (" << Valid << ")\n" + << " Ensemble = " << Ensemble.contents() << "\n" + << " Lead = " << lead_str << " (" << Lead << ")\n" + << " ConvertFx = " << (ConvertFx.is_set() ? "IsSet" : "(nul)") << "\n" + << " CensorThresh = " << CensorThresh.get_str() << "\n" + << " CensorVal = " << CensorVal.serialize() << "\n" + << " nBins = " << nBins << "\n" + << " Range = " << Range.serialize() << "\n" + << " DefaultRegrid = " << interpmthd_to_string(DefaultRegrid.method) + << "(" << DefaultRegrid.width << ")\n" + << " Regrid = " << interpmthd_to_string(Regrid.method) + << "(" << Regrid.width << ")\n"; Level.dump(out); @@ -425,6 +430,13 @@ void VarInfo::set_range(const NumArray &a) { /////////////////////////////////////////////////////////////////////////////// +void VarInfo::set_default_regrid(const RegridInfo &ri) { + DefaultRegrid = ri; + return; +} + +/////////////////////////////////////////////////////////////////////////////// + void VarInfo::set_regrid(const RegridInfo &ri) { Regrid = ri; return; @@ -528,7 +540,7 @@ void VarInfo::set_dict(Dictionary &dict) { if(dict.last_lookup_status()) set_range(na); // Parse regrid, if present - Regrid = parse_conf_regrid(&dict, 
false); + Regrid = parse_conf_regrid(&dict, &DefaultRegrid, false); // Parse set_attr strings SetAttrName = diff --git a/src/libcode/vx_data2d/var_info.h b/src/libcode/vx_data2d/var_info.h index 3271376816..eba7551b67 100644 --- a/src/libcode/vx_data2d/var_info.h +++ b/src/libcode/vx_data2d/var_info.h @@ -57,7 +57,8 @@ class VarInfo int nBins; // Number of pdf bins NumArray Range; // Range of pdf bins - RegridInfo Regrid; // Regridding logic + RegridInfo DefaultRegrid; // Default regridding logic + RegridInfo Regrid; // Regridding logic // Options to override metadata ConcatString SetAttrName; @@ -189,6 +190,7 @@ class VarInfo void set_n_bins(const int &); void set_range(const NumArray &); + void set_default_regrid(const RegridInfo &); void set_regrid(const RegridInfo &); void set_level_info_grib(Dictionary & dict); diff --git a/src/libcode/vx_data2d_nc_cf/nc_cf_file.cc b/src/libcode/vx_data2d_nc_cf/nc_cf_file.cc index e985870169..41c8106f31 100644 --- a/src/libcode/vx_data2d_nc_cf/nc_cf_file.cc +++ b/src/libcode/vx_data2d_nc_cf/nc_cf_file.cc @@ -2299,7 +2299,7 @@ void NcCfFile::get_grid_mapping_polar_stereographic(const NcVar *grid_mapping_va { double x_coord_to_m_cf = 1.0; double y_coord_to_m_cf = 1.0; - static const string method_name = "NcCfFile::get_grid_mapping_polar_stereographic() --> "; + static const string method_name = "NcCfFile::get_grid_mapping_polar_stereographic() -> "; // Get projection attributes // proj_origin_lat: either 90.0 or -90.0, to decide the northern/southern hemisphere diff --git a/src/libcode/vx_regrid/vx_regrid.cc b/src/libcode/vx_regrid/vx_regrid.cc index 5fcc970601..3914c83004 100644 --- a/src/libcode/vx_regrid/vx_regrid.cc +++ b/src/libcode/vx_regrid/vx_regrid.cc @@ -40,6 +40,10 @@ switch ( info.method ) { case InterpMthd::LS_Fit: case InterpMthd::Bilin: case InterpMthd::Nearest: + case InterpMthd::Upper_Left: + case InterpMthd::Upper_Right: + case InterpMthd::Lower_Right: + case InterpMthd::Lower_Left: out = met_regrid_generic 
(in, from_grid, to_grid, info); break; diff --git a/src/libcode/vx_statistics/apply_mask.cc b/src/libcode/vx_statistics/apply_mask.cc index bd12b1a25b..01c696243c 100644 --- a/src/libcode/vx_statistics/apply_mask.cc +++ b/src/libcode/vx_statistics/apply_mask.cc @@ -633,7 +633,8 @@ DataPlane parse_geog_data(Dictionary *dict, const Grid &vx_grid, regrid_info = parse_conf_regrid(dict); mlog << Debug(2) << "Regridding geography mask data " << info->magic_str() - << " to the verification grid.\n"; + << " to the verification grid using " + << regrid_info.get_str() << ".\n"; dp = met_regrid(dp, mtddf->grid(), vx_grid, regrid_info); } } diff --git a/src/libcode/vx_statistics/read_climo.cc b/src/libcode/vx_statistics/read_climo.cc index f5a0f2db71..8f8ddd8e9b 100644 --- a/src/libcode/vx_statistics/read_climo.cc +++ b/src/libcode/vx_statistics/read_climo.cc @@ -39,8 +39,11 @@ static DataPlane climo_hms_interp( //////////////////////////////////////////////////////////////////////// -DataPlane read_climo_data_plane(Dictionary *dict, int i_vx, - unixtime vld_ut, const Grid &vx_grid, +DataPlane read_climo_data_plane(Dictionary *dict, + const char *entry_name, + int i_vx, + unixtime vld_ut, + const Grid &vx_grid, const char *desc) { DataPlane dp; DataPlaneArray dpa; @@ -49,7 +52,8 @@ DataPlane read_climo_data_plane(Dictionary *dict, int i_vx, if(!dict) return dp; // Read array of climatology fields - dpa = read_climo_data_plane_array(dict, i_vx, vld_ut, vx_grid, desc); + dpa = read_climo_data_plane_array(dict, entry_name, i_vx, + vld_ut, vx_grid, desc); // Check for multiple matches if(dpa.n_planes() > 1) { @@ -66,82 +70,120 @@ DataPlane read_climo_data_plane(Dictionary *dict, int i_vx, //////////////////////////////////////////////////////////////////////// -DataPlaneArray read_climo_data_plane_array(Dictionary *dict, int i_vx, +DataPlaneArray read_climo_data_plane_array(Dictionary *dict, + const char *climo_name, + int i_vx, unixtime vld_ut, const Grid &vx_grid, const char 
*desc) { + + const char *method_name = "read_climo_data_plane_array() -> "; + + // + // Parse each of the climatology configuration entries separately + // using the "climo_name.entry_name" scope notation. Use the value + // from the specified dictionary (e.g. "fcst.climo_mean") if found, + // or use the value from the parent dictionary (e.g. top-level config + // "climo_mean") if not found. + // DataPlaneArray dpa; - StringArray climo_files; - RegridInfo regrid_info; - InterpMthd time_interp; - GrdFileType ctype; - double day_interval, hour_interval; - int i, day_ts, hour_ts; + ConcatString cs; // Check for null if(!dict) return dpa; - // Get the i-th array entry - Dictionary i_dict = parse_conf_i_vx_dict(dict, i_vx); - - // Climatology mean and standard deviation files - climo_files = i_dict.lookup_string_array(conf_key_file_name, false); + // Parse the "file_name" array entry + cs << cs_erase << climo_name << "." << conf_key_file_name; + StringArray climo_files(dict->lookup_string_array(cs.c_str())); - // Check for at least one file + // Check for at least one input file if(climo_files.n() == 0) return dpa; - // Regrid info - regrid_info = parse_conf_regrid(&i_dict); + // Parse the "field" array entry + cs << cs_erase << climo_name << "." << conf_key_field; + Dictionary *field_dict = dict->lookup_array(cs.c_str(), false); + + // Determine which climo array entry to use + int i_climo_field = bad_data_int; + if(field_dict->n_entries() == 0) return dpa; + else if(field_dict->n_entries() == 1) i_climo_field = 0; + else i_climo_field = i_vx; + + // Parse the climo dictionary + Dictionary i_dict = parse_conf_i_vx_dict(field_dict, i_climo_field); + + // Parse the "regrid" dictionary from the top-level + // config file context (e.g. "config.climo_mean.regrid") + // to serve as the default. 
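The `climo_name.entry_name` scope notation described in the comment above resolves each key against the specified dictionary first (e.g. `fcst.climo_mean`) and falls back to the parent context (e.g. the top-level `climo_mean`) when absent. A sketch of that lookup order, with `std::map` stand-ins for the MET `Dictionary` class (illustrative only):

```cpp
#include <map>
#include <string>

// Look up a key in the local scope (e.g. "fcst.climo_mean") first,
// falling back to the parent scope (top-level "climo_mean"), and
// returning an empty string if the key is found in neither.
std::string scoped_lookup(const std::map<std::string,std::string> &local,
                          const std::map<std::string,std::string> &parent,
                          const std::string &key) {
   auto it = local.find(key);
   if(it != local.end()) return it->second;
   auto pit = parent.find(key);
   return (pit != parent.end() ? pit->second : std::string());
}
```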
+ RegridInfo regrid_default = parse_conf_regrid( + dict->parent()->lookup_dictionary(climo_name, false)); - // Time interpolation - time_interp = int_to_interpmthd(i_dict.lookup_int(conf_key_time_interp_method)); + // Parse the "time_interp_method" + cs << cs_erase << climo_name << "." << conf_key_time_interp_method; + InterpMthd time_interp = int_to_interpmthd(dict->lookup_int(cs.c_str())); - // Day interval - day_interval = i_dict.lookup_double(conf_key_day_interval); + // Parse the "day_interval" value + cs << cs_erase << climo_name << "." << conf_key_day_interval; + double day_interval = dict->lookup_double(cs.c_str()); - // Range check day_interval + // Range check day_interval value if(!is_bad_data(day_interval) && day_interval < 1) { - mlog << Error << "\nread_climo_data_plane_array() -> " + mlog << Error << "\n" << method_name << "The " << conf_key_day_interval << " entry (" << day_interval << ") can be set to " << na_str << " or a value of at least 1.\n\n"; exit(1); } - // Hour interval - hour_interval = i_dict.lookup_double(conf_key_hour_interval); + // Parse the "hour_interval" value + cs << cs_erase << climo_name << "." 
<< conf_key_hour_interval; + double hour_interval = dict->lookup_double(cs.c_str()); // Range check hour_interval if(!is_bad_data(hour_interval) && (hour_interval <= 0 || hour_interval > 24)) { - mlog << Error << "\nread_climo_data_plane_array() -> " + mlog << Error << "\n" << method_name << "The " << conf_key_hour_interval << " entry (" << hour_interval << ") can be set to " << na_str << " or a value between 0 and 24.\n\n"; exit(1); } - // Check if file_type was specified - ctype = parse_conf_file_type(&i_dict); + // Log search criteria + if(mlog.verbosity_level() >= 5) { + mlog << Debug(5) + << "Searching " << climo_files.n() + << " file(s) for " << desc + << " data using climo_name = " << climo_name + << ", i_vx = " << i_vx + << ", valid time = " << unix_to_yyyymmdd_hhmmss(vld_ut) + << ", time_interp = " << interpmthd_to_string(time_interp) + << ", day_interval = " << day_interval + << ", hour_interval = " << hour_interval + << "\n"; + } // Store the time steps in seconds - day_ts = (is_bad_data(day_interval) ? bad_data_int : - nint(day_interval * 24.0 * sec_per_hour)); - hour_ts = (is_bad_data(hour_interval) ? bad_data_int : - nint(hour_interval * sec_per_hour)); - + int day_ts = (is_bad_data(day_interval) ? bad_data_int : + nint(day_interval * 24.0 * sec_per_hour)); + int hour_ts = (is_bad_data(hour_interval) ? 
bad_data_int : + nint(hour_interval * sec_per_hour)); + + // Check if file_type was specified + GrdFileType ctype = parse_conf_file_type(&i_dict); + // Search the files for the requested records - for(i=0; ifile_type()); + info->set_default_regrid(regrid_default); info->set_dict(*dict); // Read data planes @@ -226,9 +269,10 @@ void read_climo_file(const char *climo_file, GrdFileType ctype, if(!(mtddf->grid() == vx_grid)) { mlog << Debug(2) << "Regridding " << clm_ut_cs << " " << desc << " field " << info->magic_str() - << " to the verification grid.\n"; + << " to the verification grid using " + << info->regrid().get_str() << ".\n"; dp = met_regrid(clm_dpa[i], mtddf->grid(), vx_grid, - regrid_info); + info->regrid()); } else { dp = clm_dpa[i]; diff --git a/src/libcode/vx_statistics/read_climo.h b/src/libcode/vx_statistics/read_climo.h index 362efa3fce..64db97c04a 100644 --- a/src/libcode/vx_statistics/read_climo.h +++ b/src/libcode/vx_statistics/read_climo.h @@ -18,13 +18,15 @@ //////////////////////////////////////////////////////////////////////// -extern DataPlane read_climo_data_plane(Dictionary *, int, - unixtime, const Grid &, - const char *); - -extern DataPlaneArray read_climo_data_plane_array(Dictionary *, int, - unixtime, const Grid &, - const char *); +extern DataPlane read_climo_data_plane( + Dictionary *, const char *, + int, unixtime, const Grid &, + const char *); + +extern DataPlaneArray read_climo_data_plane_array( + Dictionary *, const char *, + int, unixtime, const Grid &, + const char *); //////////////////////////////////////////////////////////////////////// diff --git a/src/tools/core/ensemble_stat/ensemble_stat.cc b/src/tools/core/ensemble_stat/ensemble_stat.cc index 826b8eaf7a..8a92272d75 100644 --- a/src/tools/core/ensemble_stat/ensemble_stat.cc +++ b/src/tools/core/ensemble_stat/ensemble_stat.cc @@ -633,7 +633,8 @@ bool get_data_plane(const char *infile, GrdFileType ftype, if(do_regrid && !(mtddf->grid() == grid)) { mlog << Debug(1) << 
"Regridding field \"" << info->magic_str() - << "\" to the verification grid.\n"; + << "\" to the verification grid using " + << info->regrid().get_str() << ".\n"; dp = met_regrid(dp, mtddf->grid(), grid, info->regrid()); } @@ -691,7 +692,8 @@ bool get_data_plane_array(const char *infile, GrdFileType ftype, mlog << Debug(1) << "Regridding " << dpa.n_planes() << " field(s) \"" << info->magic_str() - << "\" to the verification grid.\n"; + << "\" to the verification grid using " + << info->regrid().get_str() << ".\n"; // Loop through the forecast fields for(i=0; imagic_str() - << " to the verification grid.\n"; + << " to the verification grid using " + << info->regrid().get_str() << ".\n"; dp = met_regrid(dp, mtddf->grid(), grid, info->regrid()); } diff --git a/src/tools/core/grid_stat/grid_stat_conf_info.cc b/src/tools/core/grid_stat/grid_stat_conf_info.cc index 6ec2dd8f98..19c5a48e83 100644 --- a/src/tools/core/grid_stat/grid_stat_conf_info.cc +++ b/src/tools/core/grid_stat/grid_stat_conf_info.cc @@ -211,7 +211,8 @@ void GridStatConfInfo::process_config(GrdFileType ftype, vx_opt = new GridStatVxOpt [n_vx]; // Check for consistent number of climatology fields - check_climo_n_vx(&conf, n_vx); + check_climo_n_vx(fdict, n_vx); + check_climo_n_vx(odict, n_vx); // Parse settings for each verification task for(i=0; iset_default_regrid(regrid_info); + obs_info->set_default_regrid(regrid_info); + // Set the VarInfo objects fcst_info->set_dict(fdict); obs_info->set_dict(odict); diff --git a/src/tools/core/mode/mode_exec.cc b/src/tools/core/mode/mode_exec.cc index 578c92acb7..2ee853f30b 100644 --- a/src/tools/core/mode/mode_exec.cc +++ b/src/tools/core/mode/mode_exec.cc @@ -292,7 +292,8 @@ void ModeExecutive::setup_traditional_fcst_obs_data() if ( !(fcst_mtddf->grid() == grid) ) { mlog << Debug(1) << "Regridding forecast " << engine.conf_info.Fcst->var_info->magic_str() - << " to the verification grid.\n"; + << " to the verification grid using " + << 
                engine.conf_info.Fcst->var_info->regrid().get_str() << ".\n";
       Fcst_sd.data = met_regrid(Fcst_sd.data, fcst_mtddf->grid(), grid,
                                 engine.conf_info.Fcst->var_info->regrid());
    }
@@ -302,7 +303,8 @@ void ModeExecutive::setup_traditional_fcst_obs_data()
    if ( !(obs_mtddf->grid() == grid) ) {
       mlog << Debug(1) << "Regridding observation "
            << engine.conf_info.Obs->var_info->magic_str()
-           << " to the verification grid.\n";
+           << " to the verification grid using "
+           << engine.conf_info.Obs->var_info->regrid().get_str() << ".\n";
       Obs_sd.data = met_regrid(Obs_sd.data, obs_mtddf->grid(), grid,
                                engine.conf_info.Obs->var_info->regrid());
    }
@@ -454,7 +456,8 @@ void ModeExecutive::setup_multivar_fcst_data(const Grid &verification_grid,
    if ( !(input._grid == grid) ) {
       mlog << Debug(1) << "Regridding forecast "
            << engine.conf_info.Fcst->var_info->magic_str()
-           << " to the verification grid.\n";
+           << " to the verification grid using "
+           << engine.conf_info.Fcst->var_info->regrid().get_str() << ".\n";
       Fcst_sd.data = met_regrid(Fcst_sd.data, input._grid, grid,
                                 engine.conf_info.Fcst->var_info->regrid());
    }
@@ -520,7 +523,8 @@ void ModeExecutive::setup_multivar_obs_data(const Grid &verification_grid,
    if ( !(input._grid == grid) ) {
       mlog << Debug(1) << "Regridding observation "
            << engine.conf_info.Obs->var_info->magic_str()
-           << " to the verification grid.\n";
+           << " to the verification grid using "
+           << engine.conf_info.Obs->var_info->regrid().get_str() << ".\n";
       Obs_sd.data = met_regrid(Obs_sd.data, input._grid, grid,
                                engine.conf_info.Obs->var_info->regrid());
    }
diff --git a/src/tools/core/point_stat/point_stat.cc b/src/tools/core/point_stat/point_stat.cc
index bda41ddf26..9e47b3181e 100644
--- a/src/tools/core/point_stat/point_stat.cc
+++ b/src/tools/core/point_stat/point_stat.cc
@@ -649,7 +649,8 @@ void process_fcst_climo_files() {
    mlog << Debug(1)
         << "Regridding " << fcst_dpa.n_planes()
         << " forecast field(s) for " << fcst_info->magic_str()
-        << " to the verification grid.\n";
+        << " to the verification grid using "
+        << fcst_info->regrid().get_str() << ".\n";

    // Loop through the forecast fields
    for(j=0; j
            " << R"(when using the "-paired" command line option, the )"
-           << "the file list length (" << fcst_files.n()
+           << "file list length (" << fcst_files.n()
            << ") and series length (" << n_series_pair
            << ") must match.\n\n";
      usage();
@@ -562,7 +563,8 @@ void get_series_data(int i_series,
       mlog << Debug(2)
            << "Regridding forecast " << fcst_info->magic_str()
-           << " to the verification grid.\n";
+           << " to the verification grid using "
+           << fcst_info->regrid().get_str() << ".\n";
       fcst_dp = met_regrid(fcst_dp, fcst_grid, grid,
                            fcst_info->regrid());
    }
@@ -582,7 +583,8 @@ void get_series_data(int i_series,
       mlog << Debug(2)
            << "Regridding observation " << obs_info->magic_str()
-           << " to the verification grid.\n";
+           << " to the verification grid using "
+           << obs_info->regrid().get_str() << ".\n";
       obs_dp = met_regrid(obs_dp, obs_grid, grid,
                           obs_info->regrid());
    }
@@ -859,11 +861,14 @@ void process_scores() {

    // Loop over the series variable
    for(int i_series=0; i_series
       int i_fcst = (conf_info.get_n_fcst() > 1 ? i_series : 0);
+      int i_obs = (conf_info.get_n_obs() > 1 ? i_series : 0);

       // Store the current VarInfo objects
-      fcst_info = conf_info.fcst_info[i_fcst];
+      fcst_info = (conf_info.get_n_fcst() > 1 ?
+                   conf_info.fcst_info[i_series] :
+                   conf_info.fcst_info[0]);
       obs_info = (conf_info.get_n_obs() > 1 ?
                   conf_info.obs_info[i_series] :
                   conf_info.obs_info[0]);
@@ -898,22 +903,26 @@ void process_scores() {
       // Read forecast climatology data
       fcmn_dp = read_climo_data_plane(
-                conf_info.conf.lookup_array(conf_key_fcst_climo_mean_field, false),
+                conf_info.conf.lookup_dictionary(conf_key_fcst),
+                conf_key_climo_mean,
                 i_fcst, fcst_dp.valid(), grid,
                 "forecast climatology mean");
       fcsd_dp = read_climo_data_plane(
-                conf_info.conf.lookup_array(conf_key_fcst_climo_stdev_field, false),
+                conf_info.conf.lookup_dictionary(conf_key_fcst),
+                conf_key_climo_stdev,
                 i_fcst, fcst_dp.valid(), grid,
                 "forecast climatology standard deviation");

       // Read observation climatology data
       ocmn_dp = read_climo_data_plane(
-                conf_info.conf.lookup_array(conf_key_obs_climo_mean_field, false),
-                i_fcst, fcst_dp.valid(), grid,
+                conf_info.conf.lookup_dictionary(conf_key_obs),
+                conf_key_climo_mean,
+                i_obs, fcst_dp.valid(), grid,
                 "observation climatology mean");
       ocsd_dp = read_climo_data_plane(
-                conf_info.conf.lookup_array(conf_key_obs_climo_stdev_field, false),
-                i_fcst, fcst_dp.valid(), grid,
+                conf_info.conf.lookup_dictionary(conf_key_obs),
+                conf_key_climo_stdev,
+                i_obs, fcst_dp.valid(), grid,
                 "observation climatology standard deviation");

       bool fcmn_flag = !fcmn_dp.is_empty();
diff --git a/src/tools/core/series_analysis/series_analysis_conf_info.cc b/src/tools/core/series_analysis/series_analysis_conf_info.cc
index 2e032256a0..fd19bf61bc 100644
--- a/src/tools/core/series_analysis/series_analysis_conf_info.cc
+++ b/src/tools/core/series_analysis/series_analysis_conf_info.cc
@@ -212,8 +212,9 @@ void SeriesAnalysisConfInfo::process_config(GrdFileType ftype,
       exit(1);
    }

-   // Check climatology fields
-   check_climo_n_vx(&conf, n_fcst);
+   // Check for consistent number of climatology fields
+   check_climo_n_vx(fdict, n_fcst);
+   check_climo_n_vx(odict, n_obs);

    // Allocate space based on the number of verification tasks
    fcst_info = new VarInfo * [n_fcst];
diff --git a/src/tools/core/wavelet_stat/wavelet_stat.cc b/src/tools/core/wavelet_stat/wavelet_stat.cc
index 27d868d78b..648d5fd9b6 100644
--- a/src/tools/core/wavelet_stat/wavelet_stat.cc
+++ b/src/tools/core/wavelet_stat/wavelet_stat.cc
@@ -300,7 +300,8 @@ void process_scores() {
    if(!(fcst_mtddf->grid() == grid)) {
       mlog << Debug(1) << "Regridding forecast "
            << conf_info.fcst_info[i]->magic_str()
-           << " to the verification grid.\n";
+           << " to the verification grid using "
+           << conf_info.fcst_info[i]->regrid().get_str() << ".\n";
       fcst_dp = met_regrid(fcst_dp, fcst_mtddf->grid(), grid,
                            conf_info.fcst_info[i]->regrid());
    }
@@ -326,7 +327,8 @@ void process_scores() {
    if(!(obs_mtddf->grid() == grid)) {
       mlog << Debug(1) << "Regridding observation "
            << conf_info.obs_info[i]->magic_str()
-           << " to the verification grid.\n";
+           << " to the verification grid using "
+           << conf_info.obs_info[i]->regrid().get_str() << ".\n";
       obs_dp = met_regrid(obs_dp, obs_mtddf->grid(), grid,
                           conf_info.obs_info[i]->regrid());
    }
diff --git a/src/tools/other/gen_ens_prod/gen_ens_prod.cc b/src/tools/other/gen_ens_prod/gen_ens_prod.cc
index b088c74dcf..9f36c55ad3 100644
--- a/src/tools/other/gen_ens_prod/gen_ens_prod.cc
+++ b/src/tools/other/gen_ens_prod/gen_ens_prod.cc
@@ -468,18 +468,20 @@ void get_climo_mean_stdev(GenEnsProdVarInfo *ens_info, int i_var,
         << ens_info->get_var_info(i_ens)->magic_str() << "\".\n";

    cmn_dp = read_climo_data_plane(
-              conf_info.conf.lookup_array(conf_key_climo_mean_field, false),
+              conf_info.conf.lookup_dictionary(conf_key_ens),
+              conf_key_climo_mean,
               i_var, ens_valid_ut, grid,
-              "climatology mean");
+              "ensemble climatology mean");

    mlog << Debug(4)
         << "Reading climatology standard deviation data for ensemble field \""
         << ens_info->get_var_info(i_ens)->magic_str() << "\".\n";

    csd_dp = read_climo_data_plane(
-             conf_info.conf.lookup_array(conf_key_climo_stdev_field, false),
+             conf_info.conf.lookup_dictionary(conf_key_ens),
+             conf_key_climo_stdev,
              i_var, ens_valid_ut, grid,
-             "climatology standard deviation");
+             "ensemble climatology standard deviation");

    // Unset the MET_ENS_MEMBER_ID environment variable
    if(set_ens_mem_id) {
@@ -647,7 +649,8 @@ bool get_data_plane(const char *infile, GrdFileType ftype,
    if(!(mtddf->grid() == grid)) {
       mlog << Debug(1)
            << "Regridding field \"" << info->magic_str()
-           << "\" to the verification grid.\n";
+           << "\" to the verification grid using "
+           << info->regrid().get_str() << ".\n";
       dp = met_regrid(dp, mtddf->grid(), grid, info->regrid());
    }
diff --git a/src/tools/other/grid_diag/grid_diag.cc b/src/tools/other/grid_diag/grid_diag.cc
index bb444f2061..cd5ddc843b 100644
--- a/src/tools/other/grid_diag/grid_diag.cc
+++ b/src/tools/other/grid_diag/grid_diag.cc
@@ -291,7 +292,8 @@ void process_series(void) {
    if(!(cur_grid == grid)) {
       mlog << Debug(2)
            << "Regridding field " << data_info->magic_str_attr()
-           << " to the verification grid.\n";
+           << " to the verification grid using "
+           << data_info->regrid().get_str() << ".\n";
       data_dp[i_var] = met_regrid(data_dp[i_var], cur_grid, grid,
                                   data_info->regrid());
diff --git a/src/tools/other/plot_point_obs/plot_point_obs_conf_info.cc b/src/tools/other/plot_point_obs/plot_point_obs_conf_info.cc
index 23311d2e3c..4a06fd3e89 100644
--- a/src/tools/other/plot_point_obs/plot_point_obs_conf_info.cc
+++ b/src/tools/other/plot_point_obs/plot_point_obs_conf_info.cc
@@ -486,7 +486,8 @@ void PlotPointObsConfInfo::process_config(
    // Regrid, if requested
    if(grid_data_info->regrid().enable) {
       mlog << Debug(1) << "Regridding field "
-           << grid_data_info->magic_str() << ".\n";
+           << grid_data_info->magic_str() << " using "
+           << grid_data_info->regrid().get_str() << ".\n";
       Grid to_grid(parse_vx_grid(grid_data_info->regrid(),
                                  &grid, &grid));
       grid_data = met_regrid(grid_data, grid, to_grid,
diff --git a/src/tools/tc_utils/tc_diag/tc_diag.cc b/src/tools/tc_utils/tc_diag/tc_diag.cc
index 0b551fd1f2..c4b76a7cdd 100644
--- a/src/tools/tc_utils/tc_diag/tc_diag.cc
+++ b/src/tools/tc_utils/tc_diag/tc_diag.cc
@@ -2286,9 +2286,8 @@ void TmpFileInfo::write_nc_data(const VarInfo *vi, const DataPlane &dp_in,
    RegridInfo ri = vi->regrid();

    mlog << Debug(4) << "Regridding \"" << vi->magic_str()
-        << "\" to the \"" << domain << "\" domain using the "
-        << interpmthd_to_string(ri.method) << "(" << ri.width
-        << ") interpolation method.\n";
+        << "\" to the \"" << domain << "\" domain using "
+        << ri.get_str() << ".\n";

    // Do the cylindrical coordinate transformation
    if(dp_in.nxy() > 0) {