Various SolidCP updates #218
base: master
Conversation
… Ubuntu 22.04.3 only
@@ -226,7 +229,7 @@
<div class="text-right">
<CPCC:StyleButton id="btnCancel" CssClass="btn btn-warning" runat="server" CausesValidation="False" OnClick="btnCancel_Click"> <i class="fa fa-times"> </i> <asp:Localize runat="server" meta:resourcekey="btnCancel"/> </CPCC:StyleButton>
<CPCC:StyleButton id="btnUpdate" CssClass="btn btn-success" runat="server" meta:resourcekey="btnUpdate" OnClick="btnUpdate_Click" ValidationGroup="Vps" OnClientClick="if(!confirm('Before applying new configuration VPS could be stopped.\n\nAfter the configuration is changed it will be started again automatically.\n\nDo you want to proceed?')) return false; ShowProgressDialog('Updating configuration...');"> <i class="fa fa-refresh"> </i> <asp:Localize runat="server" meta:resourcekey="btnUpdateText"/> </CPCC:StyleButton>
meta:resourcekey="btnUpdate" is used for localizing JavaScript dialog (btnUpdate.OnClientClick). There is no german confirm dialog when it is removed
Hi @spardoko, thank you for the pull request.
Ideally we should be setting these via the provider settings, not the web.config, so they can be adjusted per server. Setting options in web.config should only be done when it's not possible via other methods (that goes for other settings too). As we move toward the CoreWCF version we will be cleaning up those settings.
Hello. Can't we achieve this by extending quotas with add-ons in the existing system? I remember starting work on this but didn't finish for reasons I can't recall.

UPD: I found my earlier attempt:
SolidCP/SolidCP/Sources/SolidCP.WebPortal/DesktopModules/SolidCP/UserCreateSpace.ascx.cs, lines 254 to 260 in 31119db
commit -> bd6c69f

I had trouble understanding how to properly extend the existing quota after adding add-ons, and how to revert it after removing an add-on. I think it would be better to continue in that direction rather than adding something that bypasses the existing systems.

P.S. In any case, this PR looks too big and it's probably better to split it up.
In regards to adding configurable limits to VPS allowing for overhead: yes, the RAM and HDD are soft limits identified by text only. The purpose of those limits is to let users know the upper and lower limits for each individual server regardless of how many resources are available within their plan. I also want to point out that if the web.config entries are missing then the behavior does not change. In other words, this commit won't change the behavior for other SolidCP users unless they define the web.config entries.

When an account has one VM, this feature offers little difference other than overriding the max number of CPUs available to assign to the VM and displaying the limits in text form. The primary purpose for this feature is when there are multiple VMs per account. The VMs can span multiple servers, which is why I chose to put the settings in the web.config rather than provider settings, but I'd be happy to improve on it if a better way is suggested.

Unlike RAM and HDD, the CPU is a hard limit with this commit. With the original behavior, assigning a number of CPUs per account applies to each VM within the account. In other words, if I assign X CPUs to an account through add-ons, then every VM within the account can have a total of X CPUs. As an example, if I assign 8 CPUs to an account and the account has four VMs, then each VM can have 8 CPUs for a total assignment of 32 CPUs. With this commit, the amount of available CPU is decremented after it is assigned to a VM. As an example, if I assign 8 CPUs to an account through add-ons and the account has four VMs, then the total amount of CPU that can be assigned to all four VMs is 8 CPUs. If I assign 2 CPUs to the first VM, there will be 6 available for assignment to the second VM. If I then assign 4 CPUs to the second VM, there will only be 2 CPUs left to split between the third and fourth VMs. Of course, each VM will also be limited to a max number of CPUs defined in the web.config. This is true regardless of whether those VMs span multiple servers.

Unlike CPU, available RAM and HDD have always been decremented with each new VM within an account. That's partially why I left the per-VM limits as soft limits for now. I can make the per-VM limits hard limits sometime in the future.

Our VM plans within WHMCS are based on resources (CPU, RAM and HDD). Our clients buy those resources individually. If a client wants to buy two more procs to add to one of their servers, this commit allows them to do that. Before this commit, there was no way to do that unless there was only one server in the account (the CPU count applied to all servers). We also don't want our clients to create VMs outside of the defined limits. We allow them to buy lots of resources (CPU, RAM and HDD) because they can spread them out over lots of VMs. However, we want to limit the amount of resources assigned per individual VM.
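A minimal sketch of the "decrementing CPU pool" behavior described above (illustration only, not the PR's actual code; the class and variable names here are made up):

using System;
using System.Collections.Generic;
using System.Linq;

class Vm { public string Name; public int AllocatedCpu; }

class CpuPoolExample
{
    static void Main()
    {
        int accountCpuQuota = 8; // cores purchased via add-ons
        var vms = new List<Vm>
        {
            new Vm { Name = "vm1", AllocatedCpu = 2 },
            new Vm { Name = "vm2", AllocatedCpu = 4 },
        };

        // Old behavior: every VM could independently use the full quota (8 each).
        // New behavior: cores already assigned are subtracted from the account pool.
        int used = vms.Sum(v => v.AllocatedCpu);              // 6
        int remaining = Math.Max(0, accountCpuQuota - used);  // 2 left for further VMs

        Console.WriteLine($"Used: {used}, remaining for new VMs: {remaining}");
    }
}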
So, is it possible to bypass the set quota? If that's the case, maybe it's better to fix that issue instead of creating a rather odd solution that doesn't address the main problem (quota bypass)?

UPD:
It's still limited to the quotas defined within the plan. For a long time we only deployed one server per hosting space, but this has a lot of limitations too.
We also currently use admin-only templates in production. We mostly use them when importing VMs with OSes that we don't offer our clients for deployment, and for VMs that are currently deployed but that we no longer want to make available for clients to deploy.
For importing, it doesn't matter which OS you choose; it's just a value in the database. Also, if you stopped providing a certain OS, wouldn't it be easier to remove it? Or do you allow using an old/hidden OS for reinstallation? In that case, your solution doesn't solve the problem, and clients will be able to bypass the restrictions by reinstalling the server.
For importing, we still have to select an OS, and while it may not matter from a functional perspective, the OS selection displays on the VM configuration page, so it's nice to have it match the actual OS.

This feature is simply a way to differentiate which images are available to admins versus clients. It can be utilized in many different ways. We deploy images as admins that we don't want to be available for self-deployment by clients. It could be older or other OS versions. For example, we prefer to only allow clients to install the latest version of Windows Server. However, from time to time clients request an older version of Windows for whatever reason. If we agree, we simply deploy it for them as the admin. We also retain some custom images for special services that we deploy and further customize as admins before we release them to the client. We mark these as admin only.

Admin-only OS templates are a good solution for these and many other scenarios. If I am missing something, please explain.
Alright, but your implementation still doesn't work for server reinstallation. Also, SolidCP has not only administrators but many other roles, such as CSR, Reseller, etc., yet you've restricted access strictly to administrators. For example, in our case the administrator role isn't used at all (except for connecting and setting up new nodes); everything is handled through the CSR/Reseller role or its equivalent.

This also means the parameter name "Administrators only" isn't very suitable here. Perhaps it would be better to use something like "Disabled/Hidden" instead. And I really don't like the solution for addressing the quota bypass issue.
It wasn't intended for non-administrators. It'll work for reinstallation if you're an administrator. Hence the name "Administrators Only." I am aware of all the other roles. I don't get your point. It's intended only for admins. If I had wanted to allow other roles, I wouldn't have labeled it "Administrators Only."

It doesn't sound like you would want to use the feature even as a CSR/Reseller, so I don't plan to expand the feature to other roles. Or maybe you realized it's a good feature and want to use it. If so, just say that and I'll expand it to allow for more detailed selection of roles. I am not going to label it "Disabled/Hidden" when it's not disabled or hidden for admins.

I imagine you don't like my solution for the quota bypass issue since I have not written any code for any such issue. My code does not bypass any quotas. It fixes a flawed implementation of CPU distribution and allows for per-VM limits within the existing quotas, albeit soft limits. If you don't like it, don't use it. Quite frankly, I don't have a clue what you mean by the quota bypass issue. Based on your words, it seems like some kind of issue that you couldn't figure out how to solve. If you want to share the details with me, I'll look into it and contribute what I can.
I'm sorry, but the "if you don't like it, don't use it" approach is poor practice. You've provided a solution that doesn't fix the problem but instead adds an optional extra layer that feels out of place and, most importantly, isn't intuitive. Who will maintain its functionality in the future? If you've found an issue with quotas, you should resolve it (or create an issue with a description and reproduction steps), rather than creating such an odd workaround that requires manual file editing. It looks like this was designed solely for your unique use case, ignoring all other users of this product.

Regarding the "Admin OS Template", I somewhat like the idea of hiding OS images from clients, but your implementation assumes that only administrators will handle tasks like reinstalling or creating servers (with a hidden OS). However, using an administrator for such minor tasks is unjustified and even risky due to the administrator's extensive permissions. A simple human error (like clicking the wrong button) could lead to serious consequences.

Either way, I've shared my opinion; the final decision is up to the project maintainer.
If there is a quota issue, it should be fixed in the enterprise server, not via a workaround in the web portal. If it's only fixed in the web portal then it can still be broken when using the API, etc.

The idea of hidden templates is a good one, but maybe this needs to be more hosting-plan based? Something like a "Show hidden templates" option, which could then apply to resellers etc. If a serveradmin loads a customer account, these should always be shown.
Hi, we also use SolidCP with WHMCS and we haven't had any problems with CPU quotas so far. We packed all Hyper-V servers into a virtual server, then created a hosting plan with it and added add-ons like additional vCores, RAM, HDD, etc. Can you describe how to reproduce the bug with CPU quotas?
Hi! First of all, I agree with all of you that the configurable limits commit might not be the best approach. I have been looking into how to make a better solution. I do believe the parameters should be removed from the web.config and integrated into a hosting plan policy, with some parts living in the provider and some in the enterprise server so that quotas can be enforced on API calls. However, it has been working for me as it stands and solves many of my issues.

Let me explain the CPU quota issue in more detail. In my SolidCP hosting plan for VMs, I set "Number of Servers" to unlimited (any number more than 1 will do) and "Number of CPU cores" to 0. I have a SolidCP add-on with "Number of CPU cores" set to 1. I use configurable options in WHMCS so that clients can purchase a desired number of CPU cores. Let's say a client purchases 4 CPU cores through WHMCS. WHMCS then provisions the SolidCP add-on with a quantity of 4. Now the client's SolidCP hosting space has unlimited servers and 4 CPU cores. This is all fine so far. The problem is that the client can now create as many VMs as they want, with each VM having between 1 and 4 CPUs. They could spin up 10 servers, each with 4 CPUs, if they want. If they go back into WHMCS and increase the number of CPU cores from 4 to 8, they can come back into SolidCP, edit the configuration of all 10 VMs and give each of them between 1 and 8 CPU cores.

This commit changes that behavior. It allows the client to provision only the exact number of CPU cores that they purchase. So, for example, if the client purchases 4 CPU cores in WHMCS and spins up one server in SolidCP with 4 CPU cores, they will not be able to spin up any more servers until they buy more CPU cores in WHMCS. If the client goes back into WHMCS, increases the number of CPU cores from 4 to 8 and then returns to SolidCP to spin up another server, it will only allow them to add between 1 and 4 CPU cores to the new server, because the first server is already using 4 of the 8 CPU cores that were purchased. That's the first CPU issue. I hope my explanation makes sense.

Further, this commit addresses another problem related to CPU cores. Currently, each VM is limited to the number of physical cores in one processor of the host server. At least that's how it seems to work for me. For example, if I have a physical host with 4 x 12-core procs with hyperthreading, the number of CPU cores available to provision to any one VM in SolidCP on this host will be limited to 12, although the server itself registers 96 processors with hyperthreading. Hyper-V allows for oversubscription of CPU; SolidCP does not. I should be able to assign as many CPU cores to a VM as I want rather than be limited to the number of physical cores by SolidCP, of course within the limits of the defined SolidCP quotas. We monitor the utilization of resources in an entirely different system, which is what I assume most of you do.

Let's say I create a VM in MSVMM with 48 processors on the host described above and then import the VM into SolidCP. If you go into the configuration page of the VM after it has been imported into SolidCP, it will display 48 CPU cores. However, if you then click edit configuration, the dropdown for CPU cores will only display a maximum of 12 CPU cores, which only truly becomes apparent after you click the dropdown.
If the client adds some additional RAM or something else to the VM and saves the configuration, the VM will immediately be knocked down from 48 to 12 CPU cores, and it's not immediately apparent.

This commit changes that behavior. It allows the computed number of max CPU cores to be overridden by a value set by the administrator. In the example above, it allows me to provision the VM with 48 cores directly in SolidCP and, further, it will not automatically change the value when the configuration is edited. Keep in mind that this example is only possible if the per-VM limit is set to at least 48, the CPU core add-on quota is at least a quantity of 48, and there are at least 48 unprovisioned CPUs in the hosting space. In other words, the quotas in the plan still apply. Setting the per-VM limit to 48 as described above will not change anything unless the client's CPU quota exceeds the number of physical CPU cores of one processor of their host. If the number of CPU cores is set to 4 in the SolidCP hosting plan, then only 4 CPU cores will be available for provisioning as long as the per-VM limit is set higher. I hope this explanation makes sense too.

This solves an issue that has been a real problem for me in the past. We had to prevent clients with large amounts of CPU from editing their own configuration because SolidCP kept modifying the CPU core count on the VM whenever the client would save the configuration. So when those clients wanted to increase RAM or HDD or whatever, we would have to do it manually in MSVMM and then execute a script to modify the SolidCP database with the new value. Now clients can modify their own VM configuration without concern for the number of CPU cores changing on their VM.
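A hedged sketch of the override described above (the PR reads the value from web.config; the appSettings key name used here is hypothetical, not the one in the PR):

using System;
using System.Configuration;

static class PerVmCpuLimitSketch
{
    // If an administrator-defined per-VM limit exists, it replaces the computed host value;
    // if the entry is missing, behavior is unchanged (as the comment states).
    public static int ResolveMaxCores(int computedMaxCores)
    {
        string raw = ConfigurationManager.AppSettings["VpsPerVmMaxCpuCores"]; // hypothetical key name
        if (int.TryParse(raw, out int overrideValue) && overrideValue > 0)
            return overrideValue;      // e.g. 48, set by the administrator
        return computedMaxCores;       // fall back to the value computed from the host
    }
}

Plan quotas would still be enforced separately, so such an override would only widen the per-VM selection, not the hosting space totals.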
@FuseCP-TRobinson Thanks. Also, the same needs to be done in VpsDetailsEditConfiguration.ascx.cs.
@FuseCP-TRobinson I agree that the setting should be made in a policy rather than web.config. I don't agree with the fix that you guys made, though. It does not solve the problem; it only solves the one issue you tested for. It literally just bypasses the maxCores calculation when there is unused quota.
I agree it fixes the issue, but it causes another one where maxCores is no longer taken into account, which means that if a user sets it above the allowed cores, the VM won't start with no clear reason as to why.
Let's try to summarize what you are trying to achieve. But doesn't this code count them correctly?

SolidCP/SolidCP/Sources/SolidCP.Providers.Virtualization.HyperV-2012R2/HyperV2012R2.cs, lines 2146 to 2154 in 3de665b
From the code, I can see that it loops through each processor and sums up all their threads. Isn't that happening in your case? If not, then maybe my fix really isn't suitable (and perhaps the changes should be reverted), but we still need to figure out why it's not summing up all CPU threads in your case.
I think for VpsDetailsEditConfiguration.ascx.cs we should set this to:
This will make sure the drop-down is limited to maxCores.
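The exact snippet referred to above did not survive, but the intent (capping the quota-derived value at maxCores before the dropdown is bound) could look roughly like this hypothetical sketch, mirroring the cpuQuotausable formula in the code excerpt further below:

using System;

static class DropDownCapSketch
{
    // Hypothetical reconstruction, not the author's actual line: the usable quota for the
    // CPU dropdown should never exceed the per-VM maximum reported for the host.
    public static int UsableCores(int quotaAllocated, int quotaUsed, int currentVmCores, int maxCores)
    {
        int usable = (quotaAllocated - quotaUsed) + currentVmCores;
        return Math.Min(usable, maxCores);
    }

    static void Main()
    {
        // Quota would allow 48 cores, but the host reports 10 per VM, so the dropdown stops at 10.
        Console.WriteLine(UsableCores(quotaAllocated: 48, quotaUsed: 0, currentVmCores: 0, maxCores: 10));
    }
}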
@FuseCP-TRobinson UPD:
@berkut1 My last comment was based on my test machine, which is limited to 10 CPUs, and maxCores is ignored. With the changes I made, it fixed that issue so that it limits the drop-down to 10 CPUs when a quota is above maxCores. I can also see that on another server it's 40 cores total while the drop-down still shows 48 (without the changes).
@FuseCP-TRobinson

// bind CPU cores
int maxCores = ES.Services.VPS2012.GetMaximumCpuCoresNumber(vm.PackageId);
PackageContext cntx = PackagesHelper.GetCachedPackageContext(PanelSecurity.PackageId);
QuotaValueInfo cpuQuota2 = cntx.Quotas[Quotas.VPS2012_CPU_NUMBER];
int cpuQuotausable = (cpuQuota2.QuotaAllocatedValue - cpuQuota2.QuotaUsedValue) + vm.CpuCores;
if (cpuQuota2.QuotaAllocatedValue == -1)
{
    for (int i = 1; i < maxCores + 1; i++)
        ddlCpu.Items.Add(i.ToString());
    ddlCpu.SelectedIndex = ddlCpu.Items.Count - 1; // select last (maximum) item
}
else if (cpuQuota2.QuotaAllocatedValue >= cpuQuota2.QuotaUsedValue)
{
    if (cpuQuotausable > maxCores)
    {
        for (int i = 1; i < maxCores + 1; i++)
            ddlCpu.Items.Add(i.ToString());
        ddlCpu.SelectedIndex = ddlCpu.Items.Count - 1; // select last (maximum) item
    }
    else
    {
        for (int i = 1; i < cpuQuotausable + 1; i++)
            ddlCpu.Items.Add(i.ToString());
        ddlCpu.SelectedIndex = ddlCpu.Items.Count - 1; // select last (maximum) item
    }
}
else
{
    for (int i = 1; i < vm.CpuCores + 1; i++)
        ddlCpu.Items.Add(i.ToString());
    ddlCpu.SelectedIndex = ddlCpu.Items.Count - 1; // select last (maximum) item
}

If …

UPD:

$coreCount = 0
$processors = Get-WmiObject -Class Win32_Processor
foreach ($processor in $processors) {
    $coreCount += $processor.NumberOfLogicalProcessors
}
Write-Output "NumberOfLogicalProcessors: $coreCount"

Just in case, please also check this PowerShell command: [Environment]::ProcessorCount

And one last PowerShell script:

Add-Type @"
using System;
using System.Runtime.InteropServices;

public class CpuInfo
{
    [DllImport("kernel32.dll")]
    public static extern void GetSystemInfo(out SYSTEM_INFO lpSystemInfo);

    [StructLayout(LayoutKind.Sequential)]
    public struct SYSTEM_INFO
    {
        public ushort wProcessorArchitecture;
        public ushort wReserved;
        public uint dwPageSize;
        public IntPtr lpMinimumApplicationAddress;
        public IntPtr lpMaximumApplicationAddress;
        public IntPtr dwActiveProcessorMask;
        public uint dwNumberOfProcessors;
        public uint dwProcessorType;
        public uint dwAllocationGranularity;
        public ushort wProcessorLevel;
        public ushort wProcessorRevision;
    }

    public static int GetProcessorCoresNumber()
    {
        SYSTEM_INFO sysInfo;
        GetSystemInfo(out sysInfo);
        return (int)sysInfo.dwNumberOfProcessors;
    }
}
"@

$processorCount = [CpuInfo]::GetProcessorCoresNumber()
Write-Output "NumberOfLogicalProcessors: $processorCount"

If you have a unique problem where the values are not calculated correctly, we need to figure out which method will give the correct result.
@berkut1 @FuseCP-TRobinson The host I used in the example has two physical processors. I had run the WMI command on it before and it produced 64 logical processors. Give me some time and I can try it on a bunch of different servers with different procs, including one with 4 physical procs. I have not tried the second PowerShell command, but I will try that as well. I can tell you that I've seen anywhere between 2 and 12 cores at most in the CPU dropdown on different physical hosts, never more than 12. I can also tell you that every single one of my servers has at least 48 logical processors. Therefore, when using the commit, I set the static value to 48 in my SolidCP and it solves all the issues. People would need to figure out that number based on their environment if they want to solve all issues with CPU allocation.

@berkut1 I completely understand where you're at with this regarding fixing the bug, and I don't disagree with you. The problem is that the only way I could solve the bug was to define a per-VM limit. Keep in mind, even if you get the maxCores value to match the right logical processor count, it is still a problem, because you might migrate the VM to a host that has a different logical processor count. What do you do then?
I reverted the changes on my SolidCP. I think I have found the problem. The maxCores value is fetched from the machine running SolidCP Server, not from the remote Hyper-V server.
@FuseCP-TRobinson Yeah, if someone uses Remote Hyper-V instead of installing the SolidCP server on each server, they will get the wrong result.

SolidCP/SolidCP/Sources/SolidCP.Providers.Virtualization.HyperV-2012R2/HyperV2012R2.cs, line 1514 in 9bb1b43

We already get the computer name here:

SolidCP/SolidCP/Sources/SolidCP.Providers.Virtualization.HyperV-2012R2/HyperV2012R2.cs, lines 136 to 139 in 9bb1b43
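Purely as an illustration of the direction being discussed (this is not the code in HyperV2012R2.cs, and the class and method names here are made up), a WMI query scoped to the remote Hyper-V host rather than the local machine could look like this:

using System;
using System.Management;

static class RemoteCoreCountSketch
{
    // Hedged sketch: sum the logical processors of the *remote* Hyper-V host
    // instead of the machine running SolidCP Server.
    public static int GetLogicalProcessorCount(string computerName)
    {
        var scope = new ManagementScope($@"\\{computerName}\root\cimv2");
        scope.Connect();

        int count = 0;
        var query = new ObjectQuery("SELECT NumberOfLogicalProcessors FROM Win32_Processor");
        using (var searcher = new ManagementObjectSearcher(scope, query))
        {
            foreach (ManagementObject cpu in searcher.Get())
                count += Convert.ToInt32(cpu["NumberOfLogicalProcessors"]);
        }
        return count;
    }
}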
Once the issue with maxCores is fixed, there won't be any problems as long as the new host has the same number of CPU cores or more.
Hi @berkut1, we previously had the problem that if a VM was moved to another HV (by Cluster Manager in our case), SolidCP could no longer find and control it. I made a function in SolidCP which reassigns the VM back to the correct Hyper-V server, but nothing is done with quotas.

As I said, we also use SolidCP with WHMCS (a hosting plan is associated with a WHMCS product and SolidCP add-ons with WHMCS configurable options) and we have had no problems with quotas so far: it either applies the maximum vCores quota (hosting plan + add-ons) or displays the maximum available HV cores. We installed the SolidCP server on all Hyper-V nodes, then packed all SolidCP servers into a virtual server, and we use the virtual server for hosting plans. We didn't have a scenario where a VM could have more vCores than an HV node, since we move a VM either within the cluster (where all HVs have the same processors) or from the old cluster to the new one (where all HVs have more cores).
@spardoko If you want, you can try testing this fix, since I currently can't test it on a remote server and have only tested it locally with one processor. In theory, it should work.
@berkut1 @FuseCP-TRobinson I did not mean to close this pull request again. Sorry about that. Let me try to fix it.
@berkut1 This makes a lot of sense. We use remote server settings. We point the SolidCP Server URL to a high-availability cluster of VMs whose purpose is to proxy the Hyper-V connections. Those all have the SolidCP server component installed; none of our Hyper-V servers do. Currently, all of the VMs in the cluster have 2 procs, which is why I only ever see two procs right now. It hasn't always been set up this way, though. The varying number of CPUs I was seeing in the past was from when it was either pointing to another URL/server or landing on a VM in the cluster with a different number of CPUs.

I believe your fix will work, but testing will take some work on my end. I'll test and let you know if I find any other problems. We still want a feature to define per-VM limits, including CPU, but I will look at putting that into a policy and making it better.
@spardoko Creating (different?) limits for each server within a single Hosting Space is, I think, very difficult to implement adequately, because limits are tied exclusively to the Hosting Space, not to objects within it. You might explore ideas where a client within one Hosting Space could have child Hosting Spaces, similar to how Resellers work. Following this approach, with one server per Hosting Space, you could manage limits for each server within a single parent Hosting Space for the client. This implementation (making the client function like a Reseller) seems the simplest and most logical solution. Much of the functionality already exists for Resellers, so it would just require extending it to clients and ensuring that it doesn't break anything else.

P.S. After writing these thoughts, it seems to me that this is the most correct solution, and you should consider exploring this direction.

UPD: On the other hand, why do this if you can simply follow the "one Hosting Space - one server" approach, which would give each server its own limits? Or is the WHMCS module preventing you from doing this? In that case, it might be easier to modify the WHMCS module to work with this scheme.
@berkut1 No, not at all; I don't mean different limits for each server within a single hosting space. I mean the ability to create a policy for RAM, HDD and CPU that would apply to all servers within a space. First, keep in mind that we want to have hosting plans as I've described before: multiple VMs in one plan, with add-ons for CPU, RAM and HDD that can be applied by our clients however they want, similar to the screenshots. Clients would be able to buy a bunch of CPU, RAM and HDD and then launch new servers and edit server configurations however they want, and if we apply a policy then they will be forced to follow those rules.

Within that space, we may want to cap the maximum CPUs per VM to some number less than maxCores. The quota may say the plan has 100 CPUs, but the policy may say only 10 CPUs can be applied to a single VM. And maybe we only want to allow clients to provision RAM in 512 MB increments. And maybe we want to cap HDD on any one VM to 1 TB even though the plan quota might be 10 TB. And maybe we want to only allow HDD to be provisioned in 25 GB increments. Or whatever. In any case, plan quotas would still apply. We also want some way to show clients what resources are available and what policies are in place when provisioning/editing. This is especially true for RAM and HDD since they are not a dropdown, although, if there was a policy in place, I suppose they could be dropdowns too (just thinking out loud).

We can accomplish some of that by putting one VM in one space, but not all of it. Unless I am missing something with the one-server approach (and please apprise me if I am), clients won't have the ability to share private VLANs between servers, won't be able to switch external IPs from one server to another, won't be able to reallocate resources like CPU and RAM to other servers, etc. Those things are possible in a multi-VM plan. In any case, it's certainly more user friendly to have all the VMs for a client/space/user group on one page. We actually had a one-server-per-space approach in the past. It simply has not worked for us (and our clients). I appreciate your feedback.
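A rough sketch of what such a per-VM provisioning policy could look like as a data structure (all names hypothetical; this is not existing SolidCP code, and plan quotas would still be enforced separately):

// Hypothetical policy object illustrating the per-VM rules described above.
public class VmProvisioningPolicy
{
    public int? MaxCpuCoresPerVm { get; set; }   // e.g. 10, even if the plan quota is 100
    public int? MaxRamMbPerVm { get; set; }      // optional per-VM RAM ceiling
    public int RamStepMb { get; set; } = 512;    // RAM only provisionable in these increments
    public int? MaxHddGbPerVm { get; set; }      // e.g. 1024 GB, even if the plan allows 10 TB
    public int HddStepGb { get; set; } = 25;     // HDD only provisionable in these increments
}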
Alright, I think I vaguely understand your idea. You're allocating resources to clients but also want to control how they use those resources within their quota. If you can integrate this into SolidCP in a clean way that doesn't burden the codebase (for example, the web.config approach is a bad example of how not to do it), then go ahead and give it a try.

However, in your case I'd approach it differently. I'd create a separate website and manage all SolidCP resources exclusively through its API. This way, clients would have their own portal with specific rules, like the resource management you described, while SolidCP's API would handle quota checks. If you're interested, I've built a wrapper for the SolidCP API for my company to simplify automation: https://github.com/berkut1/scpm
"You’re allocating resources to clients but also want to control how they use those resources within their quota." Yes, exactly. "If you can integrate this into SolidCP in a clean way that doesn’t burden the codebase (for example, the web.config approach is a bad example of how not to do it), then go ahead and give it a try." You don't need to keep beating me up over the approach. I've already agreed the web.config is bad and that I would need to do better. "However, in your case, I’d approach it differently. I’d create a separate website and manage all SolidCP resources exclusively through its API. " Maybe I don't understand but this doesn't seem like a good approach to me. I mean I see the value in what you've created. I just don't think it's right for me. It seems like a lot of work to simply be able to have some control over how clients provision vm resources. Also, seems like I would need to reinvent much of what SolidCP already does. I'd prefer to throw a bunch of resources into SolidCP and give clients the flexibility to use it however they want. I've been using this project in some form or another since the dotnetpanel days. I thought that's how it was intended to be used. |
If you have the determination to do it, then give it a try.
As for your concern that creating a separate management layer seems complicated: after you spend time trying to properly integrate your ideas into SolidCP without breaking anything else, you'll know SolidCP so well that building a standalone application using its API will feel like a piece of cake. :)
Edited: No bueno. Holes in my head. |
@berkut1 My apologies for my comments. I genuinely appreciate your contributions to this project.
Description
Various updates that mattered to me.
Fixes # (issue)
These commits have been tested and are used in my production environment.
How Has This Been Tested?
Built code to ensure it has no errors
Checklist: