Manifest/Pull Request Testing Environment #26267
-
#26251 (review) Here's a discussion where we can talk about and suggest better ways to test manifests.
-
The options, as I see them at this point:
(I only put the Windows Container option on here hoping someone from Microsoft would say whether or not they'll ever let me RDP into a container. A winget moderator can dream :D)

On Linux I'd just …

P.S.: @denelon, can you share anything about how the validation environment is provisioned during the pipeline run? Is a new VM spun up on Azure for every run?

Tagging @OfficialEsco and @ItzLevvie so they can share their setups.
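In the meantime, here's a rough sketch of how I'd smoke-test a manifest locally on a throwaway VM or Windows Sandbox using the client itself. Treat this as an assumption about recent winget builds, not the official validation flow: the `LocalManifestFiles` setting may not exist on older clients, and the manifest path below is just a placeholder.

```powershell
# Sketch: local manifest smoke test on a disposable Windows VM/Sandbox.
# Assumes a reasonably recent winget build; the manifest path is a placeholder.

# Newer builds require explicitly allowing local manifests (needs an elevated prompt;
# skip this line on builds that don't have the setting).
winget settings --enable LocalManifestFiles

# Static validation of the manifest files against the schema.
winget validate --manifest .\manifests\p\Publisher\Package\1.2.3\

# Install from the local manifest to check that the installer actually runs.
winget install --manifest .\manifests\p\Publisher\Package\1.2.3\

# Confirm the package shows up the way the manifest says it should.
winget list --name "Package"
```

That obviously doesn't replicate whatever the pipeline does (SmartScreen, AV, etc.), but it catches the basic "does this manifest even install" class of failures before a PR goes up.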
-
I appreciate the list of pros and cons by @jedieaston for the different choices of environment. I also agree with @OfficialEsco on the validation.

Our validation does use a special environment for testing (a VM in a dedicated environment). We are working with several other teams on reporting the findings so we can surface them with the bots. In some cases we simply get a pass/fail, or the response from the service times out. There are also concerns about how the data gets reported: we need to make sure the data is formatted in a consistent manner. We are trying to avoid building something completely custom when several other services, like SmartScreen, already exist.

Other automated tests produce a high quantity of false negative results. In those cases, we've implemented the waiver system with the .validation files so a person can review the submission against policy and determine whether a manifest should pass. The engineering team is simply responding to the business decisions when adding .validation files with waivers.

The challenges we currently have with matching data in Apps & Features will hopefully be resolvable with the new schema and enhancements to validation. We're also working through dependencies so they can be tested appropriately. Our next release includes a few other improvements in the client, and then we will begin focusing more on the bugs we've been seeing to improve the experience with the client.

Most of the challenges we are encountering are due to legacy installers and the way they report packages once they have been installed. We knew this was going to be a challenge, and we knew we would have to work with Independent Software Vendors and publishers to improve the overall customer experience. This is a huge team effort, and it will continue to require collaboration and feedback.

Keep submitting Issues for new Bugs and new Feature requests. They are helping to drive the direction of the product, and they let us know where we need to spend our engineering resources.
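To make the Apps & Features matching problem concrete: the data the client has to correlate manifests against is whatever the installer writes into the uninstall registry keys, which you can inspect with something like the snippet below. This is just a generic registry query for illustration, not the client's actual matching logic.

```powershell
# Sketch: dump the Apps & Features (ARP) entries that a manifest has to be
# matched against. Plain registry query, not how winget itself does matching.
$uninstallKeys = @(
    'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*',
    'HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*',
    'HKCU:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*'
)

Get-ItemProperty -Path $uninstallKeys -ErrorAction SilentlyContinue |
    Where-Object { $_.DisplayName } |
    Select-Object DisplayName, DisplayVersion, Publisher, PSChildName |
    Sort-Object DisplayName |
    Format-Table -AutoSize
```

Comparing the DisplayName/DisplayVersion a legacy installer writes here with what the manifest declares is usually where the mismatches show up, which, as I understand it, is what the new schema's Apps & Features data is meant to help with.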