4 results for: inference

NVIDIA published developer guidance on squeezing larger generative AI models onto Jetson edge modules, aiming to unlock more capable robots and physical agents.
AWS launches G7e SageMaker instances with NVIDIA RTX PRO 6000 Blackwell GPUs
AWS added G7e instances to SageMaker AI built on NVIDIA RTX PRO 6000 Blackwell GPUs, offering 96 GB of GDDR7 per GPU in 1-, 2-, 4-, and 8-GPU node sizes to simplify hosting large open-source foundation models.
AllenAI launches vla-eval to unify Vision-Language-Action benchmarking
vla-eval decouples model inference from simulator execution with a WebSocket+msgpack protocol and Docker isolation, supporting 14 benchmarks and six model servers.
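The decoupling described above means the simulator never imports the model: each step, it sends an observation over the wire and waits for an action. The actual vla-eval protocol uses WebSocket transport with msgpack serialization; the sketch below is a hypothetical stand-in using a stdlib socket pair with length-prefixed JSON frames, purely to illustrate the request/response shape of such a split.

```python
import json
import socket
import struct
import threading

# Hypothetical sketch: vla-eval's real protocol is WebSocket + msgpack with
# Docker isolation. Here, length-prefixed JSON over a socketpair stands in
# to show how model inference can be decoupled from simulator execution.

def send_msg(sock: socket.socket, obj: dict) -> None:
    """Serialize obj and send it with a 4-byte big-endian length prefix."""
    payload = json.dumps(obj).encode()
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_msg(sock: socket.socket) -> dict:
    """Read exactly one length-prefixed frame and deserialize it."""
    (length,) = struct.unpack(">I", sock.recv(4))
    buf = b""
    while len(buf) < length:
        buf += sock.recv(length - len(buf))
    return json.loads(buf)

def model_server(sock: socket.socket) -> None:
    """Toy model server: maps one observation message to one action message."""
    req = recv_msg(sock)
    # A real server would run VLA inference on req["observation"] here.
    send_msg(sock, {"action": [0.0] * 7, "step": req["step"]})

# Simulate one simulator<->model round trip in-process.
sim_side, model_side = socket.socketpair()
worker = threading.Thread(target=model_server, args=(model_side,))
worker.start()
send_msg(sim_side, {"step": 3, "observation": "rgb_frame_placeholder"})
reply = recv_msg(sim_side)
worker.join()
print(reply["step"])  # → 3
```

Because the only coupling is the message schema, the simulator side and the model server can run in separate containers, which is what makes the benchmark/model-server matrix tractable.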
Qwen3.6-35B-A3B bests Claude Opus 4.7 on Willison's pelican test
Simon Willison reports that a local, quantized Qwen3.6-35B-A3B run produced better pelican and flamingo illustrations than Anthropic's Claude Opus 4.7.