In this tutorial, we will explore the yolov8 aimbot: the integration of YOLOv8 computer vision (YOLO) object detection and mouse control using an Arduino in the context of an automated bot. This setup is particularly useful for tasks such as in-game aiming assistance or robotic control that requires precise and dynamic mouse movements.
To access the guide on training and sourcing datasets, check out: Yolov8 Aimbot with Ultralytics and Roboflow
For the GitHub source, use: https://github.com/slyautomation/yolov8
We will begin by exploring two essential pieces of the yolov8 aimbot project:
- Mouse Control with PyArduinoBot: This involves smooth and controlled movement of the mouse cursor via serial communication with an Arduino. The goal is to break down a large mouse movement into smaller incremental steps, ensuring smooth movement that can be used in FPS games or similar applications. PyArduinoBot is designed to send these precise movements to the Arduino, which in turn controls the mouse.
- Object Detection with YOLO and Custom Automation: Using a YOLO (You Only Look Once) model for real-time object detection, we can identify objects of interest on the screen, calculate their position, and move the mouse towards them. The YOLO model processes the image, identifies bounding boxes around the objects, and we use that data to determine the coordinates to which the mouse should move.
Imports:
import cv2
import keyboard
import numpy as np
import scipy
import serial
import torch
import ultralytics
from mss import mss
from ultralytics import YOLO
from ultralytics.utils.plotting import Annotator
import PyArduinoBot_v2
from PyArduinoBot_v2 import arduino_mouse
- cv2: OpenCV, used for image processing.
- keyboard: For keyboard interaction detection (e.g., keypress detection).
- numpy (np): Used for numerical operations, including handling image data.
- scipy: Used for spatial calculations like finding the nearest point.
- serial: For communicating with Arduino via serial connection.
- torch: PyTorch for GPU-accelerated computation.
- ultralytics: A package for using YOLO models for object detection.
- mss: A package for screenshot capture.
- PyArduinoBot_v2: Custom module related to Arduino control.
- YOLO: Object detection model from ultralytics.
- Annotator: For adding annotations (text, boxes) to images.
Global Variables:
PyArduinoBot_v2.FOV = 1.2
PyArduinoBot_v2.FPS = True
PyArduinoBot_v2.num_steps = 10
- Sets some parameters for `PyArduinoBot_v2`:
  - Field of View (FOV): Set to `1.2`.
  - FPS: Enabled.
  - num_steps: Steps set to `10`.
`run_checks` Function:
def run_checks():
    ultralytics.checks()
    print("Using GPU:", torch.cuda.is_available())
- ultralytics.checks(): Runs the Ultralytics environment checks (software and hardware setup for YOLO).
- torch.cuda.is_available(): Checks if GPU (CUDA) is available and prints the result.
`mouse_action` Function:
def mouse_action(x, y, button):
    global arduino
    arduino_mouse(x, y, ard=arduino, button=button, winType='FPS')
- arduino_mouse: Sends the coordinates `(x, y)` and button info to the Arduino mouse module for input simulation.
`custom_predict` Function:
def custom_predict(sourc='screen', sav=True, sho=False, imgs=(800, 800), con=0.3, save_tx=False):
    predictions = model.predict(source=sourc, save=sav, show=sho, imgsz=imgs, conf=con, save_txt=save_tx)
    boxes_data = []
    for result in predictions:
        boxes = result.boxes
        for box in boxes:
            b = box.xyxy[0]
            c = box.cls
            conf = box.conf[0]
            label = f"{model.names[int(c)]} {conf*100:.2f}%"
            boxes_data.append((b, label))
    return boxes_data
- custom_predict: Uses the YOLO model to predict objects from an image source. It extracts the bounding boxes and labels from the model's predictions.
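As a quick illustration of the label string built inside `custom_predict` (the class-name mapping below is a made-up stand-in for `model.names`):

```python
# Stand-in for model.names: maps class index -> class name (hypothetical values)
names = {0: "enemy"}
cls_idx, conf = 0, 0.8734  # example class index and confidence score

# Same f-string format custom_predict uses for each detection
label = f"{names[int(cls_idx)]} {conf * 100:.2f}%"
print(label)  # enemy 87.34%
```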
Main Execution Block:
if __name__ == '__main__':
    global arduino
    port = 'COM5'
    baudrate = 115200
    arduino = serial.Serial(port=port, baudrate=baudrate, timeout=.1)
    monitor = {"top": 0, "left": 0, "width": 1920, "height": 1080}
    sct = mss()
    mod = 'valorantv2.pt'
    model = YOLO(mod)
    Bot = True
    while Bot:
        close_points = []
        img = np.array(sct.grab(monitor))
        img = cv2.cvtColor(img, cv2.COLOR_BGRA2BGR)
        bigger = cv2.resize(img, (800, 800))
        boxes_data = custom_predict(sourc=bigger, sav=False, sho=False)
- Sets up Arduino connection:
  - Port: `'COM5'`.
  - Baudrate: `115200`.
- Screen Capture: Uses `mss()` to capture the screen within the `monitor` dimensions.
- Loads YOLO Model: `YOLO(mod)` loads the model from a file (`valorantv2.pt`).
Yolov8 Object Detection and Display:
color = (0, 255, 0)  # BGR colour and line thickness used below (define once, before the loop)
thickness = 2
for box, label in boxes_data:
    box = [int(coord * 1920 / 800) if i % 2 == 0 else int(coord * 1080 / 800) for i, coord in enumerate(box)]
    start_point = (box[0], box[1])
    end_point = (box[2], box[3])
    center_x = round((box[0] + box[2]) / 2)
    height = box[3] - box[1]
    center_y = round(box[1] + 0.1 * height)
    img = cv2.rectangle(img, start_point, end_point, color, thickness)
    img = cv2.circle(img, (center_x, center_y), radius=2, color=(0, 0, 255), thickness=-1)
    img = cv2.putText(img, label, (box[0], box[1] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, thickness)
    close_points.append((center_x, center_y))
- Processes Object Detections:
  - Rescales the bounding box to the original screen dimensions.
  - Draws a rectangle around detected objects.
  - Marks the center of each box with a small circle and appends it to `close_points`.
This code processes the bounding boxes detected by the YOLO model and annotates the image with rectangles, circles, and text labels based on the objects detected. Let’s break it down step by step:
1. Rescaling the Bounding Box:
box = [int(coord * 1920 / 800) if i % 2 == 0 else int(coord * 1080 / 800) for i, coord in enumerate(box)]
- Purpose: The bounding box coordinates generated by YOLO are based on a smaller image resolution (800×800). This line rescales the bounding box to the resolution of the original image (1920×1080).
- Mechanism:
  - The `enumerate(box)` function iterates over each coordinate in the `box` list.
  - If the coordinate is an `x` coordinate (even index), it is scaled by the ratio `1920 / 800`.
  - If the coordinate is a `y` coordinate (odd index), it is scaled by `1080 / 800`.
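Pulled out on its own, the rescaling step can be sketched as a small helper (the function name `rescale_box` and the example box are ours, not part of the original script):

```python
def rescale_box(box, src=(800, 800), dst=(1920, 1080)):
    """Scale xyxy box coordinates from the model's input size back to screen size."""
    sx, sy = dst[0] / src[0], dst[1] / src[1]
    # even indices are x coordinates, odd indices are y coordinates
    return [int(c * sx) if i % 2 == 0 else int(c * sy) for i, c in enumerate(box)]

# A box detected in the centre of the 800x800 frame maps back to 1920x1080
print(rescale_box([400, 400, 600, 500]))  # [960, 540, 1440, 675]
```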
2. Defining the Bounding Box Corners:
start_point = (box[0], box[1])
end_point = (box[2], box[3])
- Purpose: Defines the starting point (top-left corner) and the ending point (bottom-right corner) of the bounding box on the image.
- Explanation: `box[0]` and `box[1]` are the coordinates for the top-left corner, while `box[2]` and `box[3]` are the coordinates for the bottom-right corner.
3. Calculating the Center of the Box:
center_x = round((box[0] + box[2]) / 2)
height = box[3] - box[1]
center_y = round(box[1] + 0.1 * height)
- Purpose: Calculates the center of the bounding box, which is often used for determining the center of the object detected.
- Center X: `center_x` is calculated by averaging the `x` coordinates of the top-left and bottom-right corners.
- Center Y: `center_y` is offset by 10% of the bounding box height, placing it slightly below the top of the box. This slight adjustment positions the aim point closer to the object's head.
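The centre and head-offset arithmetic in isolation (the helper name `aim_point` and the example box are ours):

```python
def aim_point(box, head_offset=0.1):
    """Return the target pixel: horizontal centre, 10% down from the top edge."""
    center_x = round((box[0] + box[2]) / 2)
    height = box[3] - box[1]
    center_y = round(box[1] + head_offset * height)
    return center_x, center_y

# Box spanning x 960-1440, y 540-675 -> aim slightly below the top edge
print(aim_point([960, 540, 1440, 675]))  # (1200, 554)
```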
4. Drawing the Bounding Box:
img = cv2.rectangle(img, start_point, end_point, color, thickness)
- Purpose: Draws the bounding box around the detected object.
- Mechanism: The `cv2.rectangle()` function draws a rectangle on the image `img` using the `start_point` and `end_point` coordinates. The `color` and `thickness` are specified to style the rectangle.
5. Drawing a Center Circle:
img = cv2.circle(img, (center_x, center_y), radius=2, color=(0, 0, 255), thickness=-1)
- Purpose: Draws a small circle at the calculated center of the bounding box.
- Mechanism: The `cv2.circle()` function draws a circle at `(center_x, center_y)` with a radius of 2 pixels and a red color `(0, 0, 255)`. The thickness `-1` means the circle is filled.
6. Adding a Label to the Image:
img = cv2.putText(img, label, (box[0], box[1] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, thickness)
- Purpose: Places the class label and confidence score of the detected object just above the bounding box.
- Mechanism: The `cv2.putText()` function adds the `label` string at a position slightly above the top-left corner of the bounding box (`box[0], box[1] - 10`). It uses the `cv2.FONT_HERSHEY_SIMPLEX` font, a size of `0.5`, and the specified `color` and `thickness`.
7. Tracking Close Points:
close_points.append((center_x, center_y))
- Purpose: Appends the center coordinates of the bounding box to the `close_points` list. This list is used later in the script for identifying the closest detected object to a reference point, like the center of the screen.
Summary:
This code handles the yolov8 aimbot visualization of detected objects by drawing bounding boxes, marking the centers, and labeling the objects with their class and confidence scores. It also stores the centers of these boxes for potential further use in decision-making, such as moving the mouse towards the nearest object. The rescaling of the bounding boxes ensures that the coordinates match the original image size, and the use of OpenCV functions provides the drawing and text-adding functionality.
Yolov8 Aimbot Mouse Action Based on Detection:
if len(close_points) != 0:
    pt = (960, 540)
    try:
        closest = close_points[scipy.spatial.KDTree(close_points).query(pt)[1]]
        if keyboard.is_pressed("shift"):
            mouse_action(closest[0], closest[1], button='left')
    except:
        pass
- KDTree: Finds the closest detected object to the center of the screen using spatial querying.
- Keyboard Trigger: If the “Shift” key is pressed, the mouse moves towards the closest object and triggers an action (simulating a mouse click) for the yolov8 aimbot.
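The nearest-target lookup can be tried in isolation. `KDTree(close_points).query(pt)` returns `(distance, index)` of the nearest neighbour; for a handful of points it is equivalent to a plain minimum over Euclidean distances, which we use below to keep the sketch dependency-free (the detection centres are invented):

```python
import math

close_points = [(1200, 554), (300, 900), (1000, 500)]  # hypothetical detection centres
pt = (960, 540)  # screen centre at 1920x1080

# What KDTree.query(pt) computes: the point minimising Euclidean distance to pt
closest = min(close_points, key=lambda p: math.dist(p, pt))
print(closest)  # (1000, 500)
```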
Display and Loop Continuation:
cv2.imshow("images", img)
cv2.waitKey(5)
- Continuously shows the processed frame (`img`) with detected objects and waits for 5 milliseconds before continuing the loop.
yolov8 aimbot PyArduinoBot_v2
Imports and Initial Setup
Imports:
import ctypes
import math
import random
import time
import ctypes.wintypes
import serial
- `ctypes` and `ctypes.wintypes`: These are used for interacting with the Windows API, allowing for low-level system tasks like manipulating the mouse cursor.
- `math` and `random`: For mathematical operations and random number generation.
- `time`: Used for delays in program execution.
- `serial`: For communicating with an Arduino via a serial connection.
Global Variables:
num_steps = 10
FOV = 1.0
FPS = False
# FIXES SLOW TIME.SLEEP IN WINDOWS OS
timeBeginPeriod = ctypes.windll.winmm.timeBeginPeriod #new
timeBeginPeriod(1) #new
- `num_steps`: Controls how many steps the mouse will move in a sequence to simulate smoother movement.
- `FOV`: Field of View scaling factor, used to adjust the mouse movement based on in-game sensitivity or other factors.
- `FPS`: Determines whether the program should assume the mouse starts from a fixed point, as in a first-person shooter (the fixed point is `(960, 540)`).
- `timeBeginPeriod(1)`: Sets the system timer resolution to 1 ms to improve the accuracy of `time.sleep()` in Windows, making the program more responsive.
yolov8 aimbot Mouse Position and Movement Calculation
Cursor Position:
def _position():
    """Returns the current xy coordinates of the mouse cursor as a two-integer
    tuple by calling the GetCursorPos() win32 function.

    Returns:
      (x, y) tuple of the current xy coordinates of the mouse cursor.
    """
    cursor = ctypes.wintypes.POINT()
    ctypes.windll.user32.GetCursorPos(ctypes.byref(cursor))
    return (cursor.x, cursor.y)
- `_position()`: Retrieves the current mouse cursor's position using Windows API calls via `ctypes`.
- Point on Line Calculation:
def getPointOnLine(x1, y1, x2, y2, n):
    """
    Returns an (x, y) tuple of the point that has progressed a proportion ``n`` along the line defined by the two
    ``x1``, ``y1`` and ``x2``, ``y2`` coordinates.

    This function was copied from the pytweening module, so that it can be called even if PyTweening is not installed.
    """
    global FOV, num_steps, storagex, adj_storagex, storagey, adj_storagey
    print("Target x:", x2 - x1)
    print("Target x FOV:", (x2 - x1) * FOV)
    print(n)
    x = (((x2 - x1) * (1 / (num_steps)))) * FOV
    y = (((y2 - y1) * (1 / (num_steps)))) * FOV
    storagex += x
    storagey += y
    print("x:", x)
    print("Storage x:", storagex)
    if x < 0:
        f_x = str(math.ceil(abs(x)) * -1)
        adj_storagex += math.ceil(abs(x)) * -1
    else:
        f_x = str(math.ceil(x))
        adj_storagex += math.ceil(x)
    if y < 0:
        f_y = str(math.ceil(abs(y)) * -1)
        adj_storagey += math.ceil(abs(y)) * -1
    else:
        f_y = str(math.ceil(y))
        adj_storagey += math.ceil(y)
    print("Adj Storage x:", adj_storagex)
    print(f_x, f_y)
    return (f_x + ":" + f_y)
- `getPointOnLine_v1()` and `getPointOnLine()`: These functions calculate incremental mouse movement based on a start point `(x1, y1)` and target point `(x2, y2)`. The functions calculate movement in steps based on the `num_steps` variable and apply the `FOV` scaling factor.
- The `storagex` and `storagey` variables accumulate the calculated steps, and `adj_storagex` and `adj_storagey` adjust the actual movements sent to the Arduino.
The function `getPointOnLine()` calculates a point on a straight line between two given points `(x1, y1)` and `(x2, y2)` in a grid (typically screen coordinates). The function breaks down the movement between these two points into smaller steps, using a scaling factor based on the `FOV` (Field of View) and the total number of steps `num_steps`. This is often used for simulating smooth, incremental mouse movement.
Function Parameters:
- `x1, y1`: The starting coordinates of the movement.
- `x2, y2`: The target coordinates of the movement.
- `n`: The current step number (between `0` and `num_steps - 1`). This represents the progress along the line from start to end.
Global Variables:
- `FOV`: A scaling factor that adjusts the movement based on the field of view. It affects how far the mouse moves in each step.
- `num_steps`: Total number of steps over which the movement from `(x1, y1)` to `(x2, y2)` will be divided. Each step makes the movement smoother.
- `storagex, adj_storagex, storagey, adj_storagey`: These variables accumulate the actual and adjusted values of the mouse's movement over all steps. `storagex` and `storagey` track the total distance moved, while `adj_storagex` and `adj_storagey` track the integer values sent to the Arduino, ensuring any discrepancies between the stored and actual positions are corrected at the final step.
Breakdown of yolov8 aimbot Code Logic:
Print Statements:
- `print("Target x:", x2 - x1)`: Prints the total distance to be covered along the x-axis.
- `print("Target x FOV:", (x2 - x1) * FOV)`: Prints the total distance to be covered along the x-axis after applying the FOV scaling factor.
Calculating Movement for Current Step:
- `x = (((x2 - x1) * (1 / num_steps))) * FOV`: Computes the incremental movement along the x-axis for this step. The distance `(x2 - x1)` is divided by `num_steps` to get the portion of movement to apply for the current step, which is then scaled by the `FOV`.
- `y = (((y2 - y1) * (1 / num_steps))) * FOV`: The same calculation for the y-axis.
Accumulate the Movement:
- `storagex += x` and `storagey += y`: These variables accumulate the floating-point movement values for every step, allowing the function to keep track of the total movement along each axis.
Adjust the Movement for Arduino (Integer Conversion):
- Since the Arduino requires integer values to control the mouse, the function converts the floating-point increments into integers.
- If `x < 0`: Convert the floating-point `x` value to a negative integer using `math.ceil(abs(x)) * -1`.
- Else: Convert the floating-point `x` value to a positive integer using `math.ceil(x)`.
- This gives the Arduino a whole-pixel movement each step, while the adjusted values are accumulated in `adj_storagex` and `adj_storagey` so the rounding error can be tracked.
Handling the Final Step:
- `if n == num_steps - 1`: In the last step (`n` equals `num_steps - 1`), the function checks if the accumulated integer values (`adj_storagex` and `adj_storagey`) match the floating-point totals (`storagex` and `storagey`). If there's any discrepancy, the function adjusts the final values sent to the Arduino to correct this error.
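The drift that final step must absorb is easy to see with a quick sketch of the per-step ceil conversion (`num_steps = 10`, `FOV = 1.0`, and a 37-pixel target are all example values):

```python
import math

num_steps, FOV = 10, 1.0
dx = 37  # example target distance in pixels

step = (dx / num_steps) * FOV           # 3.7 px per step as a float
sent = [math.ceil(step)] * num_steps    # each step is ceil'd to 4 before sending
print(sum(sent) - dx)  # 3 -> three pixels of overshoot for the last step to absorb
```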
Return Statement:
- `return (f_x + ":" + f_y)`: The function returns the calculated movement as a formatted string, where `f_x` is the integer movement for the x-axis and `f_y` is the integer movement for the y-axis. These values are sent to the Arduino for execution.
Purpose:
The function ensures that the mouse movement between two points is smooth and gradual. By breaking the movement into smaller steps and converting floating-point values to integers, it allows for precise control of the mouse, which is essential in gaming scenarios or automated interactions where accuracy and smoothness matter.
It also ensures that rounding errors (due to the conversion to integers) are corrected in the final step, ensuring the mouse always reaches its target position accurately.
Mouse Movement Command:
def _mouseMoveDrag(x, y, ard=None, winType=None):
    global previousList, lastList, num_steps, adj_storagex, storagex, storagey, adj_storagey
    adj_storagex = 0
    storagex = 0
    storagey = 0
    adj_storagey = 0
    if winType == 'FPS':
        startx, starty = (960, 540)
    else:
        startx, starty = _position()
    arduino = ard
    # If the duration is small enough, just move the cursor there instantly.
    steps = [(x, y)]
    print('num_steps:', num_steps)
    print("start:", startx, starty)
    steps = [getPointOnLine(startx, starty, x, y, n) for n in range(num_steps)]
    # Making sure the last position is the actual destination.
    if not FPS:
        steps.pop()
        steps.pop(0)
    steps = str(steps)
    print("Final Coords sent:", steps)
    arduino.write(bytes(steps, 'utf-8'))
- `_mouseMoveDrag()`: Handles moving the mouse from its current position to a target `(x, y)` position. It uses `getPointOnLine()` to break down the movement into steps and communicates the calculated positions to the Arduino.
yolov8 aimbot Arduino Communication
This uses the same Arduino code as the arduino aimbot project:
Aimbot GitHub source: https://github.com/slyautomation/valorant_aimbot
For the written guide on the arduino aimbot code, check out: https://www.slyautomation.com/blog/valorant-aimbot-with-color-detection-with-python/
Mouse Movement with Arduino:
def arduino_mouse(x=100, y=100, ard=None, button=None, winType=None):
    _mouseMoveDrag(x, y, ard=ard, winType=winType)
    time_start = time.time()
    stat = getLatestStatus(ard)
    if button == None:
        time.sleep(0.01)
    else:
        time.sleep(0.05)
    c = random.uniform(0.02, 0.05)
    if button == 'left':
        ard.write(bytes(button, 'utf-8'))
        stat = getLatestStatus(ard)
        time.sleep(c)
    if button == 'right':
        ard.write(bytes(button, 'utf-8'))
        stat = getLatestStatus(ard)
        time.sleep(c)
- `arduino_mouse()`: Sends the movement or click commands to the Arduino via a serial connection. It moves the mouse or simulates clicks (left or right) based on the provided `x`, `y` coordinates and button input (`left` or `right`). It waits for a response from the Arduino and introduces slight random delays to simulate natural mouse behavior.
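The randomized delay can be factored out as a tiny helper (the function name is ours; the 0.02–0.05 s band matches the `c = random.uniform(0.02, 0.05)` line above):

```python
import random
import time

def humanized_click_delay(lo=0.02, hi=0.05):
    """Sleep a random interval between clicks to mimic natural timing."""
    c = random.uniform(lo, hi)
    time.sleep(c)
    return c

delay = humanized_click_delay()
print(0.02 <= delay <= 0.05)  # True
```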
Arduino Response Handling:
def getLatestStatus(ard=None):
    status = 'Nothing'
    # inWaiting() is the legacy pyserial name; pyserial 3.x prefers the in_waiting property
    while ard.inWaiting() > 0:
        status = ard.readline()
    return status
- `getLatestStatus()`: Reads any response from the Arduino to check the status of the last command.
yolov8 aimbot Key Functionality Summary:
- Mouse movement is simulated by calculating a series of small steps between the current position and the target, then sending these coordinates to the Arduino.
- Arduino-controlled mouse actions: The Arduino receives these commands and moves the actual mouse hardware, optionally performing clicks.
- FPS Mode: When enabled, the mouse always starts from a fixed center point, which is typical in first-person shooter games.
yolov8 aimbot Conclusion
In this tutorial, we explored the powerful combination at the heart of the yolov8 aimbot: real-time object detection with YOLO and precise mouse control via PyArduinoBot. By leveraging these technologies, we can automate mouse movements with accuracy, making this useful for applications such as robotic control, screen interaction, or even gaming.
However, it's crucial to emphasize the importance of fair play and adhering to the terms of service of any game or software. Automation tools, while powerful, should be used responsibly. In the gaming world, using automation to gain an unfair advantage can result in penalties, bans, and harm to the integrity of the community. The technologies discussed here have legitimate uses beyond gaming, such as in robotics, assistive technologies, and automation, but always be mindful of ethical considerations.
As developers and creators, we should strive to use these tools to innovate and solve problems while respecting the boundaries of fair competition and digital ethics.
Need a game to play with an aimbot? Check out this: Krunker Aimbot with Yolov8 and Roboflow