This repository has been archived by the owner on Sep 27, 2024. It is now read-only.
"I have a suggestion. I hope you can program a virtual NPU (Neural Processing Unit) that allows large models to run on it, with speeds still faster than pure CPU. The virtual NPU can call upon both the GPU and CPU to work together for acceleration. For example, it could make use of common operation instructions within the CPU and GPU as much as possible. This way, it would be more universal, allowing more older PCs and mobile phones to load more open-source large models. When more people use open-source software, more people will participate in building open-source."
This is an interesting idea! The concept of using a virtual NPU to leverage both CPU and GPU resources for running large models could indeed make it more accessible for devices with limited hardware capabilities. However, implementing such a system would be quite complex and require deep knowledge of hardware architecture, software design, and machine learning frameworks. It would also involve optimizing the distribution of tasks between the CPU and GPU to achieve the best performance. Nevertheless, if successful, it could significantly contribute to the open-source community and make advanced machine learning models more accessible to a broader range of users.
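To make the task-distribution idea concrete, here is a minimal sketch of what such a "virtual NPU" dispatch layer could look like: backends are registered in priority order (GPU first when present, CPU last), and each operation is routed to the first backend that supports it, so older machines without a GPU fall back to CPU transparently. All names here (`VirtualNPU`, `Backend`, `cpu_matmul`) are hypothetical illustrations, not part of any existing project.

```python
# Hypothetical sketch of a "virtual NPU": a dispatcher that routes tensor
# operations to the first available backend (e.g. GPU if present, else CPU).
from typing import Callable, Dict, List


class Backend:
    """A compute backend: a name plus a table of supported operations."""

    def __init__(self, name: str, ops: Dict[str, Callable]):
        self.name = name
        self.ops = ops


def cpu_matmul(a: List[List[float]], b: List[List[float]]) -> List[List[float]]:
    """Naive CPU matrix multiply (placeholder for an optimized kernel)."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]


class VirtualNPU:
    """Dispatches each op to the first backend in priority order that supports it.

    Listing a GPU backend before the CPU one means capable machines get
    acceleration, while machines without a GPU still run every op on CPU.
    """

    def __init__(self, backends: List[Backend]):
        self.backends = backends

    def run(self, op: str, *args):
        for backend in self.backends:
            if op in backend.ops:
                return backend.ops[op](*args)
        raise ValueError(f"no backend supports op {op!r}")


# On a machine with no GPU, only the CPU backend is registered.
npu = VirtualNPU([Backend("cpu", {"matmul": cpu_matmul})])
print(npu.run("matmul", [[1.0, 2.0]], [[3.0], [4.0]]))  # [[11.0]]
```

A real implementation would of course need shared memory management, kernel scheduling, and per-device cost models to decide which backend is actually faster for each op, which is where the complexity mentioned above comes in.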