OpenWrt Dual-Router DHCPv6-PD Prefix Delegation Configuration Tutorial
Core approach: configure OpenWRT_B to obtain a sub-prefix via DHCPv6-PD. The key is to give the relevant interface an appropriate prefix length and to make sure the DHCPv6 and RA services are enabled correctly; the delegated prefix must be /63 at the longest (numerically ≤ 63), otherwise there is no room left to carve out a /64 subnet for the downstream router.

Step 1: configure OpenWRT_B's WAN port (the one connected to OpenWRT_A). Open OpenWRT_B's admin UI → Network → Interfaces ...
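For reference, a minimal sketch of the matching UCI commands for the setup described above. The interface and device names are assumptions; adjust them to your own layout:

```
# On OpenWRT_A: give the LAN a prefix pool larger than /64 so odhcpd can
# sub-delegate /64s to downstream routers (assumes the default "lan" interface).
uci set network.lan.ip6assign='60'
uci commit network

# On OpenWRT_B: request an address and a delegated prefix on the upstream link.
# "eth0.2" is a placeholder for whatever device faces OpenWRT_A.
uci set network.wan6=interface
uci set network.wan6.device='eth0.2'
uci set network.wan6.proto='dhcpv6'
uci set network.wan6.reqaddress='try'
uci set network.wan6.reqprefix='auto'
uci commit network
/etc/init.d/network restart
```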
To keep up with the times and get a taste of IPv6, I want to bring IPv6 to my second-level LAN. My home network has two tiers. The first tier is the main router, which obtains an IPv6 address and an IPv6-PD prefix from the ISP and can hand a public IPv6 address to every device connected to it. The second tier is the router in my study: it can obtain a public IPv6 address for itself, but the devices connected to it only get internal IPv6 addresses ...
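A quick way to confirm which kind of address a client actually received, run from any Linux client behind the study router (eth0 is a placeholder interface name):

```
$ ip -6 addr show dev eth0 scope global
```

Addresses starting with 2xxx: or 3xxx: are globally routable (GUA); addresses starting with fd are ULAs and only work inside the LAN.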
1: Generate an SSH key

Generate a new SSH key pair in a local terminal:

```
ssh-keygen -t rsa -b 4096 -C "[email protected]"
```

When it runs, it will ask you to choose a file name; press Enter to use the default path (~/.ssh/id_rsa). You will then end up with two files: ...
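As a follow-up, the public half of the pair can be installed on a remote machine with ssh-copy-id (user@server.example.com is a placeholder host):

```
$ ssh-copy-id -i ~/.ssh/id_rsa.pub user@server.example.com
$ ssh user@server.example.com   # should now log in without a password prompt
```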
Deploying a PyTorch model to an Android phone with ncnn

When compiling ncnn, enable GPU support; Vulkan is the GPU backend: -DNCNN_VULKAN=ON

For MobileNetV3, when building against the MT (static) MSVC runtime, enable CMake policy CMP0091:

```
cmake_minimum_required(VERSION 3.20.0)
cmake_policy(SET CMP0091 NEW)
set(CMAKE_MSVC_RUNTIME_LIBRARY "MultiThreaded$<$<CONFIG:Debug>:Debug>")
project("client-project")
```

Training YOLO:

```
\Envs\torch\Scripts\activate.ps1
python train.py --batch 6 --workers 2 --imgsz 960 --epochs 300 --data "\Core\yaml\data.yaml" --cfg "\Core\yaml\cfg.yaml" --weights \Core\weights\best.pt --device 0
```

Converting the model:

```
import os

import torch
import torch.onnx
import torch.utils.model_zoo as model_zoo
from torch import nn

from libs import define
from libs.net import Net
from libs.dataset import ImageDataset

test_data = ImageDataset(define.testPath, False)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=1, shuffle=True)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = Net(out_dim=19).to(device)
model.load_state_dict(torch.load("./widget/last.pt"))
model.eval()

def saveOnnx():
    # Export using the first batch from the test loader as the example input.
    for data, target in test_loader:
        data, target = data.to(device), target.to(device)
        label = target.long()
        y = model(data)
        torch.onnx.export(
            model,                     # model being run
            data,                      # model input (or a tuple for multiple inputs)
            "./widget/best.onnx",      # where to save the model
            export_params=True,        # store the trained parameter weights inside the model file
            opset_version=10,          # the ONNX version to export the model to
            do_constant_folding=True,  # whether to execute constant folding for optimization
            input_names=['input'],     # the model's input names
            output_names=['output'],   # the model's output names
            dynamic_axes={'input': {0: 'batch_size'},    # variable length axes
                          'output': {0: 'batch_size'}})
        traced_script_module = torch.jit.trace(model, data)
        return

saveOnnx()

# Convert: ONNX -> simplified ONNX -> ncnn param/bin, then optimize
os.system("python -m onnxsim ./widget/best.onnx ./widget/best-sim.onnx")
os.system("./bin/onnx2ncnn.exe ./widget/best-sim.onnx ./widget/best.param ./widget/best.bin")
os.system("./bin/ncnnoptimize.exe ./widget/best.param ./widget/best.bin ./widget/best-opt.param ./widget/best-opt.bin 65536")
```

For YOLO weights, the same pipeline from the command line:

```
python .\export.py --weights weights/best.pt --img 960 --batch 1 --train
python -m onnxsim best.onnx best-sim.onnx
.\onnx2ncnn.exe best-sim.onnx best.param best.bin
ncnnoptimize best.param best.bin best-opt.param best-opt.bin 65536
```

Git clone the ncnn repo with submodules:

```
$ git clone https://github.com/Tencent/ncnn.git
$ cd ncnn
$ git submodule update --init
```

Build targets covered by the ncnn docs:

Build for Linux / NVIDIA Jetson / Raspberry Pi
Build for Windows x64 using VS2017
Build for macOS
Build for ARM Cortex-A family with cross-compiling
Build for Hisilicon platform with cross-compiling
Build for Android
Build for iOS on macOS with xcode
Build for WebAssembly
Build for AllWinner D1
Build for Loongson 2K1000
Build for Termux on Android

Build for Linux

Install required build dependencies: ...
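For the Android target mentioned in that list, a minimal build sketch following ncnn's documented CMake flow, with the Vulkan flag from above. It assumes $ANDROID_NDK points at an installed NDK; the ABI and platform level are example values:

```
$ cd ncnn
$ mkdir build-android && cd build-android
$ cmake -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \
        -DANDROID_ABI=arm64-v8a -DANDROID_PLATFORM=android-24 \
        -DNCNN_VULKAN=ON ..
$ make -j$(nproc)
$ make install
```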
1. Don't use the Tsinghua mirror; use the Aliyun one, because the Tsinghua mirror is incomplete.
2. Use python launch.py to install the git-based dependencies.
3. Install the pinned pip packages from requirements_versions.txt.
4. You can run python webui.py --port=7860 --server=0.0.0.0 --medvram to save VRAM.
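Putting those notes together, a sketch of the install-and-launch sequence; the Aliyun PyPI mirror URL is the commonly used one and is an assumption here, as is running from the webui checkout directory:

```
$ pip config set global.index-url https://mirrors.aliyun.com/pypi/simple/
$ pip install -r requirements_versions.txt
$ python launch.py          # also pulls in the git-based dependencies
$ python webui.py --port=7860 --server=0.0.0.0 --medvram
```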