How to perform a drag (based on X,Y mouse coordinates) on Android using AccessibilityService?

BrowJr:

I want to know how to perform a drag on Android based on mouse X,Y coordinates. Consider, as two simple examples, TeamViewer/QuickSupport drawing the "pattern password" on a remote smartphone and the pen of Windows Paint, respectively.

(screenshots illustrating both examples omitted)

All I have been able to do so far is simulate a touch (with dispatchGesture() and also AccessibilityNodeInfo.ACTION_CLICK).
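
For reference, that kind of simulated tap looks roughly like the snippet below (a minimal sketch only; the tap() method name is illustrative, and clickX/clickY are assumed to already be scaled to the device resolution):

// Minimal sketch of a tap through dispatchGesture() (API 24+), inside the AccessibilityService.
public void tap(int clickX, int clickY) {
    if (android.os.Build.VERSION.SDK_INT >= android.os.Build.VERSION_CODES.N) {
        Path path = new Path();
        path.moveTo(clickX, clickY);

        // A short stroke at a single point behaves like a tap.
        GestureDescription.StrokeDescription stroke =
                new GestureDescription.StrokeDescription(path, 0, 50);

        dispatchGesture(new GestureDescription.Builder().addStroke(stroke).build(), null, null);
    }
}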

I found these related links, but I don't know whether they are useful:

Below is my working code that sends the mouse coordinates (relative to the PictureBox control) to the remote phone and simulates a touch.

C# Windows Forms application:

private void pictureBox1_MouseDown(object sender, MouseEventArgs e)
{
    foreach (ListViewItem item in lvConnections.SelectedItems)
    {
        // Remote screen resolution
        string[] tokens = item.SubItems[5].Text.Split('x'); // Ex: 1080x1920

        int xClick = (e.X * int.Parse(tokens[0])) / pictureBox1.Size.Width;
        int yClick = (e.Y * int.Parse(tokens[1])) / pictureBox1.Size.Height;

        Client client = (Client)item.Tag;

        if (e.Button == MouseButtons.Left)
            client.sock.Send(Encoding.UTF8.GetBytes("TOUCH" + xClick + "<|>" + yClick + Environment.NewLine));
    }
}

Edit:

My last attempt was a "swipe screen" using mouse coordinates (C# Windows Forms application) and a custom Android routine (with reference to the "swipe screen" code linked above), respectively:

private Point mdownPoint = new Point();

private void pictureBox1_MouseDown(object sender, MouseEventArgs e)
{
    foreach (ListViewItem item in lvConnections.SelectedItems)
    {
        // Remote screen resolution
        string[] tokens = item.SubItems[5].Text.Split('x'); // Ex: 1080x1920

        Client client = (Client)item.Tag;

        if (e.Button == MouseButtons.Left)
        {
            int xClick = (e.X * int.Parse(tokens[0])) / pictureBox1.Size.Width;
            int yClick = (e.Y * int.Parse(tokens[1])) / pictureBox1.Size.Height;

            // Saving start position:
            mdownPoint.X = xClick;
            mdownPoint.Y = yClick;

            client.sock.Send(Encoding.UTF8.GetBytes("TOUCH" + xClick + "<|>" + yClick + Environment.NewLine));
        }
    }
}

private void PictureBox1_MouseMove(object sender, MouseEventArgs e)
{
    foreach (ListViewItem item in lvConnections.SelectedItems)
    {
        // Remote screen resolution
        string[] tokens = item.SubItems[5].Text.Split('x'); // Ex: 1080x1920

        Client client = (Client)item.Tag;

        if (e.Button == MouseButtons.Left)
        {
            int xClick = (e.X * int.Parse(tokens[0])) / pictureBox1.Size.Width;
            int yClick = (e.Y * int.Parse(tokens[1])) / pictureBox1.Size.Height;

            client.sock.Send(Encoding.UTF8.GetBytes("MOUSESWIPESCREEN" + mdownPoint.X + "<|>" + mdownPoint.Y + "<|>" + xClick + "<|>" + yClick + Environment.NewLine));
        }
    }
}
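
On the Android side, the MOUSESWIPESCREEN message is parsed and forwarded to the Swipe() routine below. Roughly, the handler could look like this (a sketch only; it assumes the same socket reading loop and the MyAccessibilityService.instance reference shown later in this post, and "<|>" has to be regex-quoted when splitting):

// Sketch: parse "MOUSESWIPESCREEN<x1><|><y1><|><x2><|><y2>" and forward it to Swipe().
if (xline.contains("MOUSESWIPESCREEN")) {
    String[] parts = xline.replace("MOUSESWIPESCREEN", "").split(Pattern.quote("<|>"));

    int x1 = Integer.parseInt(parts[0]);
    int y1 = Integer.parseInt(parts[1]);
    int x2 = Integer.parseInt(parts[2]);
    int y2 = Integer.parseInt(parts[3]);

    // 500 ms is an arbitrary duration for the swipe.
    MyAccessibilityService.instance.Swipe(x1, y1, x2, y2, 500);
}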

Android AccessibilityService:

public void Swipe(int x1, int y1, int x2, int y2, int time) {

    if (android.os.Build.VERSION.SDK_INT >= android.os.Build.VERSION_CODES.N) {
        System.out.println(" ======= Swipe =======");

        GestureDescription.Builder gestureBuilder = new GestureDescription.Builder();
        Path path = new Path();
        path.moveTo(x1, y1);
        path.lineTo(x2, y2);

        gestureBuilder.addStroke(new GestureDescription.StrokeDescription(path, 100, time));
        dispatchGesture(gestureBuilder.build(), new GestureResultCallback() {
            @Override
            public void onCompleted(GestureDescription gestureDescription) {
                System.out.println("SWIPE Gesture Completed :D");
                super.onCompleted(gestureDescription);
            }
        }, null);
    }
}

That produces the following result (but it is still not able to draw a "pattern password" the way TeamViewer does, for example). However, as said in a comment below, I think that with a similar approach this can probably be achieved using continued gestures. Any suggestions in this direction are welcome.

(screenshots of the result omitted)


Edit 2:

Definitely, the solution is continued gestures, as said in the previous edit.

And below is a supposedly fixed piece of code that I found here =>

Android AccessibilityService:

// Simulates an L-shaped drag path: 200 pixels right, then 200 pixels down.
Path path = new Path();
path.moveTo(200, 200);
path.lineTo(400, 200);

final GestureDescription.StrokeDescription sd = new GestureDescription.StrokeDescription(path, 0, 500, true);

// The starting point of the second path must match
// the ending point of the first path.
Path path2 = new Path();
path2.moveTo(400, 200);
path2.lineTo(400, 400);

final GestureDescription.StrokeDescription sd2 = sd.continueStroke(path2, 0, 500, false); // 0.5 second

HongBaoService.mService.dispatchGesture(new GestureDescription.Builder().addStroke(sd).build(), new AccessibilityService.GestureResultCallback() {

    @Override
    public void onCompleted(GestureDescription gestureDescription) {
        super.onCompleted(gestureDescription);
        HongBaoService.mService.dispatchGesture(new GestureDescription.Builder().addStroke(sd2).build(), null, null);
    }

    @Override
    public void onCancelled(GestureDescription gestureDescription) {
        super.onCancelled(gestureDescription);
    }
}, null);

Then, my doubt is: how do I correctly send the mouse coordinates to the code above, in a way that allows the drag to be performed in any direction? Any idea?
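
What I have in mind is something along these lines (a rough sketch only, not tested; dragAlong() and dispatchNext() are placeholder names inside the AccessibilityService, the points list is assumed to be already scaled to the device resolution, and android.graphics.Path/Point plus java.util.List/ArrayList are assumed to be imported):

// Rough sketch (API 26+): build a chain of continued strokes from an ordered
// list of points and dispatch them one after another.
private void dragAlong(final List<Point> points, final long segmentDuration) {
    if (points.size() < 2) return;

    final List<GestureDescription.StrokeDescription> strokes = new ArrayList<>();

    Path first = new Path();
    first.moveTo(points.get(0).x, points.get(0).y);
    first.lineTo(points.get(1).x, points.get(1).y);

    // willContinue = true keeps the virtual finger down after this segment.
    GestureDescription.StrokeDescription stroke =
            new GestureDescription.StrokeDescription(first, 0, segmentDuration, points.size() > 2);
    strokes.add(stroke);

    for (int i = 2; i < points.size(); i++) {
        Path segment = new Path();
        // Each continuation must start where the previous segment ended.
        segment.moveTo(points.get(i - 1).x, points.get(i - 1).y);
        segment.lineTo(points.get(i).x, points.get(i).y);

        boolean isLast = (i == points.size() - 1);
        stroke = stroke.continueStroke(segment, 0, segmentDuration, !isLast);
        strokes.add(stroke);
    }

    dispatchNext(strokes, 0);
}

// Dispatch stroke "index" and, on completion, dispatch the next one.
private void dispatchNext(final List<GestureDescription.StrokeDescription> strokes, final int index) {
    if (index >= strokes.size()) return;

    dispatchGesture(new GestureDescription.Builder().addStroke(strokes.get(index)).build(),
            new GestureResultCallback() {
                @Override
                public void onCompleted(GestureDescription gestureDescription) {
                    super.onCompleted(gestureDescription);
                    dispatchNext(strokes, index + 1);
                }
            }, null);
}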


Edit 3:

I found two routines that are used to perform a drag, but they use UiAutomation + injectInputEvent(). AFAIK, event injection only works in a system app, as said here and here, and I don't want that.

These are the routines found:

Then, to achieve my goal, I think the 2nd routine is more appropriate to use (following its logic, but excluding the event injection) together with the code shown in Edit 2: send all points from pictureBox1_MouseDown and pictureBox1_MouseMove (C# Windows Forms application) to fill a Point[] dynamically, and on pictureBox1_MouseUp send a command to execute the routine using the filled array. If you have an idea for the 1st routine, let me know :D.

If, after reading this edit, you have a possible solution, please show me in an answer, while I try and test this idea.

BrowJr:

Here is an example of a solution based on Edit 3 of the question.


C# Windows Forms application "formMain.cs":

using System.Collections.Generic;
using System.Drawing;
using System.Net.Sockets;
using System.Text;

private List<Point> lstPoints;

private void pictureBox1_MouseDown(object sender, MouseEventArgs e) 
{
    if (e.Button == MouseButtons.Left)
    {
        lstPoints = new List<Point>();
        lstPoints.Add(new Point(e.X, e.Y));
    }
}

private void PictureBox1_MouseMove(object sender, MouseEventArgs e)
{
    if (e.Button == MouseButtons.Left)
    {
        lstPoints.Add(new Point(e.X, e.Y));
    }
}

private void PictureBox1_MouseUp(object sender, MouseEventArgs e)
{
    lstPoints.Add(new Point(e.X, e.Y));

    StringBuilder sb = new StringBuilder();

    // Point.ToString() yields "{X=10,Y=20}", so the payload looks like
    // "MDRAWEVENT{X=10,Y=20}:{X=15,Y=40}: ...". Scale the points to the remote
    // resolution first (as in the earlier MouseDown code) if the PictureBox
    // size differs from the remote screen.
    foreach (Point obj in lstPoints)
    {
        sb.Append(Convert.ToString(obj) + ":");
    }

    serverSocket.Send(Encoding.UTF8.GetBytes("MDRAWEVENT" + sb.ToString() + Environment.NewLine));
}

Android service "SocketBackground.java":

import android.graphics.Point;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.regex.Pattern;

String xline;

// Create the reader once for the connection instead of on every loop pass.
BufferedReader xreader = new BufferedReader(new InputStreamReader(clientSocket.getInputStream(), StandardCharsets.UTF_8));

while (clientSocket.isConnected()) {

    if (xreader.ready()) {

        while ((xline = xreader.readLine()) != null) {
            xline = xline.trim();

            if (!xline.isEmpty()) {

                if (xline.contains("MDRAWEVENT")) {

                    // Expected payload: "MDRAWEVENT{X=10,Y=20}:{X=15,Y=40}: ..."
                    String coordinates = xline.replace("MDRAWEVENT", "");
                    String[] tokens = coordinates.split(Pattern.quote(":"));
                    Point[] movements = new Point[tokens.length];

                    for (int i = 0; i < tokens.length; i++) {
                        // Each token looks like "{X=10,Y=20}".
                        String[] xy = tokens[i].replace("{", "").replace("}", "").split(",");

                        int x = Integer.parseInt(xy[0].split("=")[1]);
                        int y = Integer.parseInt(xy[1].split("=")[1]);

                        movements[i] = new Point(x, y);
                    }

                    MyAccessibilityService.instance.mouseDraw(movements, 2000);
                }
            }
        }
    }
}

Android AccessibilityService "MyAccessibilityService.java":

public void mouseDraw(Point[] segments, int time) {
    if (android.os.Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {

        // Start the path at the first received point.
        Path path = new Path();
        path.moveTo(segments[0].x, segments[0].y);

        for (int i = 1; i < segments.length; i++) {

            // Extend the path with the next point and re-dispatch the
            // cumulative path as a single stroke.
            path.lineTo(segments[i].x, segments[i].y);

            GestureDescription.StrokeDescription sd = new GestureDescription.StrokeDescription(path, 0, time);

            dispatchGesture(new GestureDescription.Builder().addStroke(sd).build(), new AccessibilityService.GestureResultCallback() {

                @Override
                public void onCompleted(GestureDescription gestureDescription) {
                    super.onCompleted(gestureDescription);
                }

                @Override
                public void onCancelled(GestureDescription gestureDescription) {
                    super.onCancelled(gestureDescription);
                }
            }, null);
        }
    }
}
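
For completeness, this is how the service ends up being called for a simple drag (the coordinates below are just example values):

// Example call: drag through three points, spending 2000 ms per dispatched stroke.
Point[] segments = new Point[] {
        new Point(100, 300),
        new Point(350, 600),
        new Point(600, 900)
};
MyAccessibilityService.instance.mouseDraw(segments, 2000);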
