Patents by Inventor Hongda Wang
Hongda Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240134114
Abstract: A dispersion-compensation microstructure fiber uses pure silica glass as the background material. It includes the core, the first-type defects, the second-type defects and the cladding. The air holes in the fiber cross section are arranged in an equilateral-triangle lattice with the same adjacent air-hole-to-air-hole spacing. The core is formed by omitting 1 air hole. The first-type defects are formed by the 6 air holes located at the vertices of the hexagonal third-layer porous structure surrounding the core, together with their surrounding background material. The second-type defects are formed by the air holes in the first air-hole layer surrounding each first-type defect, together with their surrounding background material. The second-type defects act as a porous structure surrounding the first-type defects and the fundamental defect modes, and can also combine with the first-type defects to act as the core for the second-order defect modes.
Type: Application
Filed: December 22, 2023
Publication date: April 25, 2024
Applicant: YANSHAN UNIVERSITY
Inventors: Wei WANG, Chang ZHAO, Xiaochen KANG, Hongda YANG, Wenchao LI, Zheng LI, Lin SHI
-
Publication number: 20240135544
Abstract: A deep learning-based digital staining method and system are disclosed that enable the creation of digitally/virtually stained microscopic images from label-free or stain-free samples, based on autofluorescence images acquired using a fluorescence microscope. The system and method have particular applicability to the creation of digitally/virtually stained whole slide images (WSIs) of unlabeled/unstained tissue samples that are analyzed by a histopathologist. The method bypasses the standard histochemical staining process, saving time and cost. It is based on deep learning and uses, in one embodiment, a convolutional neural network trained using a generative adversarial network model to transform fluorescence images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample.
Type: Application
Filed: December 18, 2023
Publication date: April 25, 2024
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Zhensong Wei
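The image-to-image transform at the heart of this abstract, mapping a label-free autofluorescence image to a brightfield-equivalent "stained" image, can be sketched in miniature. The patented CNN trained with a GAN is replaced below by a per-channel affine map fitted by least squares on synthetic co-registered pixel pairs; every function name and data value is a hypothetical illustration, not the patented implementation.

```python
# Toy stand-in for virtual staining: fit a per-channel map on co-registered
# (autofluorescence, brightfield) training pairs, then apply it to a new image.

def fit_channel(xs, ys):
    """Fit y ~ a*x + b by ordinary least squares; return (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def virtual_stain(image, params):
    """Map a grayscale autofluorescence image to an RGB brightfield-like
    image using the fitted per-channel parameters (clipped to [0, 1])."""
    return [[tuple(min(1.0, max(0.0, a * px + b)) for a, b in params)
             for px in row] for row in image]

# "Training": synthetic co-registered pixel pairs for each output channel.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]               # autofluorescence intensities
targets = {                                     # matching brightfield channels
    "R": [0.95, 0.85, 0.75, 0.65, 0.55],        # bright background, stain absorbs
    "G": [0.90, 0.70, 0.50, 0.30, 0.10],
    "B": [0.95, 0.80, 0.65, 0.50, 0.35],
}
params = [fit_channel(xs, targets[ch]) for ch in ("R", "G", "B")]

autofluor = [[0.1, 0.9], [0.5, 0.3]]            # tiny 2x2 label-free "image"
stained = virtual_stain(autofluor, params)      # 2x2 RGB brightfield-like output
```

In the patent, the per-pixel affine map is a deep convolutional generator and the fitting objective includes an adversarial (discriminator) loss rather than plain least squares; the data flow, however, is the same: paired training images, then single-pass inference on unseen label-free inputs.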
-
Publication number: 20240078976
Abstract: Disclosed is a pixel circuit arranged in a display substrate, which has a first driving mode and a second driving mode. Content displayed in the display substrate comprises multiple display frames. In the first driving mode and the second driving mode, the display frames comprise refresh frames. A signal of a second scanning line is the same as that of a third scanning line. The time during which the signal of the second scanning line is an active-level signal comprises a first refresh time period, a second refresh time period and a third refresh time period, which occur sequentially at intervals. During the second refresh time period, a signal of a first scanning line is an inactive-level signal. The voltage of a signal at a reset voltage end is a positive voltage, and the voltage of a signal at a first initial voltage end is a negative voltage.
Type: Application
Filed: July 29, 2022
Publication date: March 7, 2024
Inventors: Tianyi CHENG, Haigang QING, Hongda CUI, Sifei AI, Guowei ZHAO, Yang YU, Li WANG, Baoyun WU
-
Publication number: 20240073028
Abstract: The present disclosure provides an anti-counterfeiting verification method, a hardware apparatus, a system, an electronic device and a storage medium, which aim at improving anti-counterfeiting effectiveness for electronic products. The method includes: generating to-be-verified information of a first device in response to a triggered verification event; and outputting the to-be-verified information to instruct a second device to send it to a verifying terminal, the verifying terminal being configured to verify the authenticity of the first device according to the to-be-verified information and to feed back a verification result to the first device and/or the second device for display. Generating the to-be-verified information of the first device includes obtaining a device identifier of the first device and a private key pre-stored in the first device.
Type: Application
Filed: August 30, 2022
Publication date: February 29, 2024
Applicant: BOE TECHNOLOGY GROUP CO., LTD.
Inventors: Hongjun Du, Tao Li, Xingxing Zhao, Huailiang Wang, Hongda Yu
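The flow this abstract describes, a device binding its identifier to a pre-stored key and a verifying terminal checking the result, can be sketched as below. The patent uses a private key (an asymmetric signature); since the Python standard library has no asymmetric primitives, an HMAC over a shared secret stands in for the signature here, and the identifier, nonce handling, and function names are hypothetical.

```python
import hashlib
import hmac
import os

def make_to_be_verified(device_id: str, key: bytes, nonce: bytes) -> bytes:
    # Device side: bind the device identifier and a fresh challenge nonce
    # to the pre-stored key. (Stand-in for signing with a private key.)
    return hmac.new(key, device_id.encode() + nonce, hashlib.sha256).digest()

def verify(device_id: str, key: bytes, nonce: bytes, tag: bytes) -> bool:
    # Verifying-terminal side: recompute and compare in constant time.
    expected = hmac.new(key, device_id.encode() + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key = os.urandom(32)                 # pre-stored in the genuine first device
nonce = os.urandom(16)               # fresh per verification event
tag = make_to_be_verified("DEV-0001", key, nonce)

genuine = verify("DEV-0001", key, nonce, tag)       # True: genuine device
forged = verify("DEV-9999", key, nonce, tag)        # False: wrong identifier
```

The fresh nonce per verification event prevents a counterfeit device from simply replaying a previously observed to-be-verified value.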
-
Patent number: 11893739
Abstract: A deep learning-based digital staining method and system are disclosed that enable the creation of digitally/virtually stained microscopic images from label-free or stain-free samples, based on autofluorescence images acquired using a fluorescence microscope. The system and method have particular applicability to the creation of digitally/virtually stained whole slide images (WSIs) of unlabeled/unstained tissue samples that are analyzed by a histopathologist. The method bypasses the standard histochemical staining process, saving time and cost. It is based on deep learning and uses, in one embodiment, a convolutional neural network trained using a generative adversarial network model to transform fluorescence images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample.
Type: Grant
Filed: March 29, 2019
Date of Patent: February 6, 2024
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Zhensong Wei
-
Publication number: 20230360230
Abstract: A computer-implemented method for tracking multiple targets includes identifying a primary target from a plurality of targets based on a plurality of images obtained from an imaging device carried by an aerial vehicle via a carrier, and determining a target group including one or more targets from the plurality of targets, where the primary target is always in the target group. Determining the target group includes determining one or more remaining targets in the target group based on a spatial relationship or a relative distance between the primary target and each target of the plurality of targets other than the primary target. The method further includes controlling at least one of the aerial vehicle or the carrier to track the target group as a whole.
Type: Application
Filed: July 17, 2023
Publication date: November 9, 2023
Inventors: Jie QIAN, Hongda WANG, Qifeng WU
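The group-membership rule in this abstract can be sketched directly: the primary target is always in the group, and each other target joins when its relative distance to the primary falls within a bound. The coordinates, threshold, and function name below are hypothetical illustration values, not the patented criteria.

```python
import math

def determine_target_group(primary, others, max_distance):
    """Primary target plus every other target within max_distance of it."""
    group = [primary]
    px, py = primary["pos"]
    for t in others:
        tx, ty = t["pos"]
        if math.hypot(tx - px, ty - py) <= max_distance:
            group.append(t)
    return group

primary = {"id": 0, "pos": (0.0, 0.0)}
others = [
    {"id": 1, "pos": (3.0, 4.0)},    # distance 5.0  -> joins the group
    {"id": 2, "pos": (30.0, 0.0)},   # distance 30.0 -> excluded
]
group = determine_target_group(primary, others, max_distance=10.0)
```

Once the group is fixed, "tracking the group as a whole" reduces to steering the vehicle or the carrier (gimbal) toward an aggregate of the group, for example its centroid, rather than toward any single member.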
-
Patent number: 11704812
Abstract: A computer-implemented method for tracking multiple targets includes identifying a plurality of targets based on a plurality of images obtained from an imaging device carried by an unmanned aerial vehicle (UAV) via a carrier, determining a target group comprising one or more targets from the plurality of targets, and controlling at least one of the UAV or the carrier to track the target group.
Type: Grant
Filed: December 19, 2019
Date of Patent: July 18, 2023
Assignee: SZ DJI TECHNOLOGY CO., LTD.
Inventors: Jie Qian, Hongda Wang, Qifeng Wu
-
Publication number: 20230060037
Abstract: A system for the detection and classification of live microorganisms in a sample includes a light source, an image sensor, and an incubator holding one or more sample-containing growth plates. A translation stage moves the image sensor and/or the growth plate(s) along one or more dimensions to capture time-lapse holographic images of microorganisms. Image processing software executed by a computing device captures time-lapse holographic images of the microorganisms or clusters of microorganisms on the one or more growth plates. The image processing software is configured to detect candidate microorganism colonies in reconstructed, time-lapse holographic images based on differential image analysis. The image processing software includes one or more trained deep neural networks that process the time-lapse image(s) of candidate microorganism colonies to detect true microorganism colonies and/or output a species associated with each true microorganism colony.
Type: Application
Filed: January 27, 2021
Publication date: February 23, 2023
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Hatice Ceylan Koydemir, Yunzhe Qiu
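The differential image analysis this abstract relies on can be sketched simply: subtract consecutive reconstructed time-lapse frames and flag pixels whose intensity grows, since a living colony gets denser over time while the static background cancels out. The frames and threshold below are synthetic illustration values; the patented system follows this step with trained deep neural networks for true/false colony classification, which is omitted here.

```python
def candidate_colony_pixels(frame_prev, frame_next, threshold):
    """Return (row, col) positions whose intensity increased by more than
    threshold between two consecutive time-lapse frames."""
    hits = []
    for r, (row_p, row_n) in enumerate(zip(frame_prev, frame_next)):
        for c, (p, n) in enumerate(zip(row_p, row_n)):
            if n - p > threshold:
                hits.append((r, c))
    return hits

t0 = [[0.1, 0.1, 0.1],
      [0.1, 0.2, 0.1],
      [0.1, 0.1, 0.1]]
t1 = [[0.1, 0.1, 0.1],
      [0.1, 0.7, 0.1],       # a colony growing at the center
      [0.1, 0.1, 0.1]]
candidates = candidate_colony_pixels(t0, t1, threshold=0.3)
```

In the real system the frames are holographic reconstructions rather than raw intensities, and neighboring flagged pixels would be grouped into candidate colony regions before classification.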
-
Publication number: 20230030424
Abstract: A deep learning-based digital/virtual staining method and system enable the creation of digitally/virtually stained microscopic images from label-free or stain-free samples. In one embodiment, the method generates digitally/virtually stained microscope images of label-free or unstained samples using fluorescence lifetime imaging (FLIM) image(s) of the sample(s) acquired with a fluorescence microscope. In another embodiment, a digital/virtual autofocusing method is provided that uses machine learning to generate a microscope image with improved focus using a trained, deep neural network. In another embodiment, a trained deep neural network generates digitally/virtually stained microscopic images, with multiple different stains, of a label-free or unstained sample obtained with a microscope. The multiple stains in the output image or sub-regions thereof are substantially equivalent to the corresponding microscopic images or image sub-regions of the same sample that has been histochemically stained.
Type: Application
Filed: December 22, 2020
Publication date: February 2, 2023
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Yilin Luo, Kevin de Haan, Yijie Zhang, Bijie Bai
-
Publication number: 20220114711
Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the trained deep neural network trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and their corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network, which rapidly outputs an output image of the sample having improved spatial resolution, depth of field, signal-to-noise ratio, and/or image contrast.
Type: Application
Filed: November 19, 2021
Publication date: April 14, 2022
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Harun Gunaydin, Kevin de Haan
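The supervised setup in this abstract, learning an enhancement from co-registered low/high-resolution pairs and then applying it to new inputs, can be sketched with the deep network replaced by a single learned unsharp-mask coefficient. The 1-D signals and all values below are synthetic illustration data, not the patented model.

```python
def blur(sig):
    """3-tap moving average with edge replication (a crude 'low-res' model)."""
    n = len(sig)
    return [(sig[max(i - 1, 0)] + sig[i] + sig[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def fit_sharpen(lo, hi):
    """Least-squares lam so that hi ~ lo + lam * (lo - blur(lo))."""
    d = [l - b for l, b in zip(lo, blur(lo))]          # high-frequency detail
    num = sum(di * (h - l) for di, h, l in zip(d, hi, lo))
    den = sum(di * di for di in d)
    return num / den

def enhance(lo, lam):
    """Apply the learned sharpening to a new low-resolution input."""
    return [l + lam * (l - b) for l, b in zip(lo, blur(lo))]

hi = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]     # co-registered high-res edge pattern
lo = blur(hi)                            # its low-res counterpart
lam = fit_sharpen(lo, hi)                # "training" on the registered pair
out = enhance(lo, lam)                   # "inference" on a low-res input

def err(a):
    return sum((x - y) ** 2 for x, y in zip(a, hi))
```

The patented method learns a far richer, spatially varying transform from many co-registered image patches, but the training/inference split and the co-registration requirement are exactly as above: the target for each low-resolution patch is the matching high-resolution patch of the same sample.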
-
Patent number: 11222415
Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the trained deep neural network trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and their corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network, which rapidly outputs an output image of the sample having improved spatial resolution, depth of field, signal-to-noise ratio, and/or image contrast.
Type: Grant
Filed: April 26, 2019
Date of Patent: January 11, 2022
Assignee: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Harun Gunaydin, Kevin de Haan
-
Publication number: 20210043331
Abstract: A deep learning-based digital staining method and system are disclosed that enable the creation of digitally/virtually stained microscopic images from label-free or stain-free samples, based on autofluorescence images acquired using a fluorescence microscope. The system and method have particular applicability to the creation of digitally/virtually stained whole slide images (WSIs) of unlabeled/unstained tissue samples that are analyzed by a histopathologist. The method bypasses the standard histochemical staining process, saving time and cost. It is based on deep learning and uses, in one embodiment, a convolutional neural network trained using a generative adversarial network model to transform fluorescence images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample.
Type: Application
Filed: March 29, 2019
Publication date: February 11, 2021
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Zhensong Wei
-
Publication number: 20210011490
Abstract: A flight control method includes determining a distance of a target relative to an aircraft based on a depth map acquired by an imaging device carried by the aircraft, determining an orientation of the target relative to the aircraft, and controlling flight of the aircraft based on the distance and the orientation.
Type: Application
Filed: July 21, 2020
Publication date: January 14, 2021
Inventors: Jie QIAN, Qifeng WU, Hongda WANG
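The two control inputs this abstract names can be sketched as: distance taken as a robust statistic over the depth-map region covering the target, and orientation taken as the angular offset of that region's center from the image center. The image size, field of view, bounding box, and function names below are hypothetical illustration values, not the patented computation.

```python
import statistics

def target_distance(depth_map, bbox):
    """Median depth inside the target's bounding box (r0, c0, r1, c1);
    the median resists stray background pixels inside the box."""
    r0, c0, r1, c1 = bbox
    vals = [depth_map[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    return statistics.median(vals)

def target_bearing(bbox, image_width, hfov_deg):
    """Horizontal angle of the bbox center relative to the optical axis,
    assuming angle scales linearly with pixel offset across the FOV."""
    _, c0, _, c1 = bbox
    cx = (c0 + c1) / 2.0
    return ((cx - image_width / 2.0) / image_width) * hfov_deg

depth = [[10.0, 10.0, 3.0, 3.1],     # synthetic 3x4 depth map (meters);
         [10.0, 10.0, 3.2, 3.0],     # the target occupies the right side
         [10.0, 10.0, 10.0, 10.0]]
bbox = (0, 2, 2, 4)                  # rows 0-1, cols 2-3 cover the target
dist = target_distance(depth, bbox)
bearing = target_bearing(bbox, image_width=4, hfov_deg=80.0)
```

A flight controller would then close the loop on these two quantities, for example yawing to drive the bearing toward zero while adjusting speed to hold the distance at a setpoint.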
-
Publication number: 20200126239
Abstract: A computer-implemented method for tracking multiple targets includes identifying a plurality of targets based on a plurality of images obtained from an imaging device carried by an unmanned aerial vehicle (UAV) via a carrier, determining a target group comprising one or more targets from the plurality of targets, and controlling at least one of the UAV or the carrier to track the target group.
Type: Application
Filed: December 19, 2019
Publication date: April 23, 2020
Inventors: Jie QIAN, Hongda WANG, Qifeng WU
-
Publication number: 20190333199
Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device, the trained deep neural network trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and their corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network, which rapidly outputs an output image of the sample having improved spatial resolution, depth of field, signal-to-noise ratio, and/or image contrast.
Type: Application
Filed: April 26, 2019
Publication date: October 31, 2019
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventors: Aydogan Ozcan, Yair Rivenson, Hongda Wang, Harun Gunaydin, Kevin de Haan
-
Patent number: 7745206
Abstract: An atomic force microscope and a method for detecting interactions between a probe and two or more sensed agents on a scanned surface, and for determining the relative locations of the two or more sensed agents, are provided. The microscope has a scanning probe with a tip that is sensitive to two or more sensed agents on said scanned surface; two or more sensing agents tethered to the tip of the probe; and a device for recording the displacement of said probe tip as a function of time, topographic images, and the spatial locations of interactions between said probe and the two or more sensed agents on said surface.
Type: Grant
Filed: January 29, 2008
Date of Patent: June 29, 2010
Assignee: Arizona State University
Inventors: Hongda Wang, Stuart Lindsay
-
Publication number: 20080209989
Abstract: An atomic force microscope and a method for detecting interactions between a probe and two or more sensed agents on a scanned surface, and for determining the relative locations of the two or more sensed agents, are provided. The microscope has a scanning probe with a tip that is sensitive to two or more sensed agents on said scanned surface; two or more sensing agents tethered to the tip of the probe; and a device for recording the displacement of said probe tip as a function of time, topographic images, and the spatial locations of interactions between said probe and the two or more sensed agents on said surface.
Type: Application
Filed: January 29, 2008
Publication date: September 4, 2008
Inventors: Hongda Wang, Stuart Lindsay