Patent abstract:
The present invention discloses a photographing method, where the method is applied to a photographing terminal, and the photographing terminal includes a first camera, a second camera and a third camera; the first camera and the third camera are color cameras, the second camera is a black and white camera, the resolution of the second camera is higher than the resolution of the first camera and higher than the resolution of the third camera, and the first camera, the second camera and the third camera are all cameras that use prime lenses; an equivalent focal length of the third camera is greater than both an equivalent focal length of the first camera and an equivalent focal length of the second camera; and the method includes: obtaining a target zoom ratio; determining at least one camera from the first camera, the second camera and the third camera as a target camera based on the target zoom ratio; capturing, using the target camera, at least one image that includes a target scene; and obtaining an output image of the target scene based on the at least one image that includes the target scene. The target scene is a scene that a user expects to photograph. According to the present invention, a lossless zoom effect of approximately 5x is obtained.
Publication number: BR112020015673A2
Application number: R112020015673-6
Filing date: 2019-03-11
Publication date: 2020-12-08
Inventors: Yinting Wang; Xi Zhang; Yifan Zhang; Jinwei Chen; Haidong GAO; Changqi HU; Ruihua Li
Applicant: Huawei Technologies Co., Ltd.
IPC main classification:
Patent description:

[001] The present invention relates to the field of terminal technologies, and in particular, to a photographing method, apparatus and device. BACKGROUND
[002] Zoom is one of the most common ways of taking pictures. Sometimes a user needs to photograph a close-up of a distant object, such as a statue in the distance or a person three to five meters away, and sometimes the user wants to use the zoom to adjust the composition of the image to be shot. For example, when a photo is taken with a mobile phone, the zoom most commonly used by a mobile phone user is 2x to 5x zoom.
[003] Ways of obtaining zoom include optical zoom (optical zoom), digital zoom (digital zoom) and the like. Although both optical zoom and digital zoom help to magnify a distant object during photographing, only optical zoom can make the photographed subject larger while keeping it relatively clear, because more pixels are used for imaging of the subject. A zoom that is similar to optical zoom in that it can not only enlarge the subject area but can also guarantee the definition of the image is referred to as lossless zoom.
[004] A photographing terminal usually uses a prime lens (a fixed-focal-length lens) or a zoom lens, and a major difference between them lies in whether optical zoom can be performed. A prime lens cannot perform optical zoom; a distant scene can be enlarged only by moving closer to the scene or by performing digital zoom using an image interpolation algorithm. A zoom lens, however, can perform optical zoom: to zoom in on a distant object, it is only necessary to adjust the zoom ratio of the zoom lens within its optical zoom range to ensure that the object is magnified without loss of detail. The zoom lens magnifies the distant object by adjusting the focal length of the lens, so that a user can clearly see details of the distant object.
[005] However, a zoom lens is usually relatively large and thick and is usually found in a digital camera. Directly using such a zoom lens, especially a zoom lens with a high zoom ratio (greater than 3x), in a portable terminal device (for example, a low-profile mobile phone) contradicts the user's pursuit of a low-profile portable terminal device. Therefore, it is common practice to use digital zoom technology to magnify a distant object. However, this technology limits the improvement in resolving power and definition of the generated image. When the zoom ratio is relatively high, a loss of image definition occurs.
[006] Therefore, there is an urgent need for a technical solution that can enable imaging on the terminal device with greater resolving power and greater definition while preserving the low-profile feature of the terminal device. SUMMARY
[007] Embodiments of the present invention provide a photographing method, apparatus and device, to obtain lossless image generation at a high zoom ratio, thereby improving a user's photographing experience.
[008] Specific technical solutions provided in the embodiments of the present invention are as follows.
[009] According to a first aspect, an embodiment of the present invention provides a photographing method, where the method is applied to a photographing terminal, and the photographing terminal includes a first camera, a second camera and a third camera; the first camera and the third camera are color cameras, the second camera is a black and white camera, and the first camera, the second camera and the third camera are all cameras that use prime lenses; an equivalent focal length of the third camera is greater than both an equivalent focal length of the first camera and an equivalent focal length of the second camera; and the method specifically includes: obtaining a target zoom ratio; determining at least one camera from the first camera, the second camera and the third camera as a target camera based on the target zoom ratio; capturing, using the target camera, at least one image that includes a target scene; and obtaining an output image of the target scene based on the at least one image that includes the target scene.
[0010] The target scene is an area that a user ultimately expects to photograph, and can also be understood as the preview image at the target zoom ratio on the terminal; therefore, there is a correspondence between the target zoom ratio and the target scene.
[0011] According to a second aspect, an embodiment of the present invention provides a photographing apparatus, where the apparatus is applied to a photographing terminal, and the photographing terminal includes a first camera, a second camera and a third camera; the first camera and the third camera are color cameras, the second camera is a black and white camera, and the first camera, the second camera and the third camera are all cameras that use prime lenses; an equivalent focal length of the third camera is greater than both an equivalent focal length of the first camera and an equivalent focal length of the second camera; and the apparatus includes: an obtaining module, configured to obtain a target zoom ratio; a determining module, configured to determine at least one camera from the first camera, the second camera and the third camera as a target camera based on the target zoom ratio; a capture module, configured to capture, using the target camera, at least one image that includes a target scene; and an image processing module, configured to obtain an output image of the target scene based on the at least one image that includes the target scene.
[0012] According to the technical solutions of the foregoing method and apparatus provided in the embodiments of the present invention, the photographing terminal uses a combination of a plurality of cameras that use prime lenses instead of a large-volume zoom apparatus, so that the thickness of the terminal is not significantly increased and a lossless zoom effect of approximately 5x is achieved. This preserves the aesthetics of the terminal, especially for a smart handheld device such as a mobile phone, meets the user's requirements for a small low-profile terminal and lossless image generation at a large zoom ratio, and improves the user experience.
[0013] According to the first aspect or the second aspect, in a possible design, the resolution of the second camera is higher than the resolution of the first camera and higher than the resolution of the third camera.
[0014] According to the first aspect or the second aspect, in a possible design, the resolution of the second camera is higher than the resolution of an output image of the first camera and higher than the resolution of an output image of the third camera.
[0015] According to the first aspect or the second aspect, in a possible design, the method includes: when the target zoom ratio is within (1, 3), determining the first camera and the second camera as target cameras; and respectively capturing, using the first camera and the second camera, images that include the target scene. The method can be carried out collaboratively by the obtaining module, the determining module and the capture module.
[0016] Compared with the third camera, the first camera and the second camera are cameras that use short-focus lenses. To meet the requirement of a low target zoom ratio, the first camera and the second camera are used to capture a color image and a black and white image respectively, and a clear image at a low target zoom ratio can be generated by subsequently using central cropping, multi-frame zoom, and black-and-white and color fusion methods. These algorithms can be performed by the image processing module.
[0017] According to the first aspect or the second aspect, in a possible design, the method includes: when the target zoom ratio is within [3, 5], further determining whether the illuminance of the target scene is lower than a predefined threshold; and if the illuminance of the target scene is lower than the predefined threshold, determining the second camera and the third camera as target cameras, and respectively capturing, using the second camera and the third camera, images that include the target scene; or if the illuminance of the target scene is not lower than the predefined threshold, determining the third camera as the target camera, and capturing, using the third camera, at least one image that includes the target scene. The method can be carried out collaboratively by the obtaining module, the determining module and the capture module.
[0018] Compared with the first camera and the second camera, the third camera is a camera that uses a telephoto lens. To meet the requirement of a moderate target zoom ratio, the third camera is used to capture a color image. If the target scene has sufficient light, that is, the target scene is in a non-dark environment, approximately lossless zoom can be achieved using central cropping and multi-frame zoom methods. If the target scene has insufficient light, that is, the target scene is in a dark environment, the second camera needs to be activated to capture a black and white image to supplement details for the color image captured by the third camera, and approximately lossless zoom can be achieved using central cropping, multi-frame zoom, and telephoto, black-and-white and color fusion methods, so that a clear image can be generated at a moderate target zoom ratio. These algorithms can be performed by the image processing module.
[0019] According to the first aspect or the second aspect, in a possible design, the method includes: when the target zoom ratio is within (5, 10], determining whether the illuminance of the target scene is lower than a predefined threshold; and if the illuminance of the target scene is lower than the predefined threshold, determining the second camera and the third camera as the target cameras, and respectively capturing, using the second camera and the third camera, images that include the target scene; or if the illuminance of the target scene is not lower than the predefined threshold, determining the third camera as the target camera, and capturing, using the third camera, at least one image that includes the target scene. The method can be carried out collaboratively by the obtaining module, the determining module and the capture module.
[0020] Compared with the first camera and the second camera, the third camera is a camera that uses a telephoto lens. To meet the requirement of a high target zoom ratio, the third camera is used to capture a color image. If the target scene has sufficient light, that is, the target scene is in a non-dark environment, approximately lossless zoom can be achieved using central cropping, multi-frame zoom and digital zoom methods. If the target scene has insufficient light, that is, the target scene is in a dark environment, the second camera still needs to be activated to capture a black and white image to supplement details for the color image captured by the third camera, and approximately lossless zoom can be achieved using central cropping, multi-frame zoom, digital zoom, and telephoto, black-and-white and color fusion methods, so that a clear image can be generated at a high target zoom ratio. These algorithms can be performed by the image processing module.
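Purely as an illustration, the following Python sketch summarizes the camera and pipeline selection described in the three preceding designs for the example ranges (1, 3), [3, 5] and (5, 10]. The function name, the camera labels and the ISO-based dark-scene test (the ISO 400 example appears later in this description) are assumptions for the sketch, not part of the claimed method.

```python
# Hypothetical summary of the camera/pipeline selection described above.
DARK_SCENE_ISO = 400  # example dark-environment threshold from this description

def select_cameras(target_zoom_ratio: float, scene_iso: int):
    """Return the cameras to activate and the processing steps to apply."""
    dark = scene_iso >= DARK_SCENE_ISO
    if 1 < target_zoom_ratio < 3:
        return (["first (color, wide)", "second (mono, wide)"],
                ["center crop", "multi-frame zoom", "mono/color fusion"])
    if 3 <= target_zoom_ratio <= 5:
        if dark:
            return (["second (mono, wide)", "third (color, tele)"],
                    ["center crop", "multi-frame zoom", "tele/mono fusion"])
        return (["third (color, tele)"], ["center crop", "multi-frame zoom"])
    if 5 < target_zoom_ratio <= 10:
        if dark:
            return (["second (mono, wide)", "third (color, tele)"],
                    ["center crop", "multi-frame zoom", "digital zoom", "tele/mono fusion"])
        return (["third (color, tele)"], ["center crop", "multi-frame zoom", "digital zoom"])
    return (["first (color, wide)"], [])

print(select_cameras(2.0, scene_iso=100))
```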
[0021] According to the first aspect or the second aspect, in a possible design, the equivalent focal length of the third camera is 3 times the equivalent focal length of the second camera, and the equivalent focal length of the second camera is equal to the equivalent focal length of the first camera.
[0022] According to the first aspect or the second aspect, in a possible design, the equivalent focal length of the first camera is 27 mm, the equivalent focal length of the second camera is 27 mm, and the equivalent focal length of the third camera is 80 mm. In other words, the equivalent focal length of the third camera is approximately 3 times the equivalent focal length of the first/second camera.
[0023] According to the first aspect or the second aspect, in a possible design, the resolution of the first camera, the resolution of the second camera and the resolution of the third camera are 10 M, 20 M and 10 M, respectively.
[0024] It should be understood that different terminals can be designed based on different user zoom requirements, and these terminals can have lenses with different parameters and use different lens combinations, image processing algorithms and the like under different zoom conditions. The use of 3x and 5x as demarcation points in the preceding description is one such implementation. More broadly, the target zoom ratio in the present invention can cover three ranges: a low range, a moderate range and a high range. For ease of description, the three ranges are represented as (1, a), [a, b] and (b, c]. As cameras using short-focus lenses (for example, an equivalent focal length of 27 mm), the first camera and the second camera have a powerful short-focus imaging capability; however, as the target zoom ratio increases, the definition of an output image obtained by processing at least one image captured by the first camera and the second camera decreases, where the processing algorithms include multi-frame zoom and black-and-white and color fusion. Therefore, subject to the definition constraint, a has an upper limit value, and the specific upper limit value is related to the lens parameters, the algorithms and the user's requirement for definition. The specific upper limit value is not listed and is not limited in this document.
[0025] In addition, if the user tolerates a limited loss of definition in a zoomed image, or if, owing to the progress of image processing algorithms, a terminal device is allowed to use a telephoto lens with a longer focal length (for example, a 4x telephoto lens or a 5x telephoto lens; to be specific, an equivalent focal length of 108 mm or 135 mm), then in the possible designs mentioned above the target zoom ratio ranges, the lens parameters and the way of combining lenses can all be adaptively adjusted based on the preceding theory, thereby obtaining an image that satisfies the user's requirement. For example, the equivalent focal length of the third camera can be greater than 80 mm. These possible designs shall all fall within the protection scope of the present invention.
[0026] In addition, if the user tolerates a limited increase in noise or loss of detail in a zoomed image under a low-light condition, or if, owing to the progress of image processing algorithms, a terminal device is allowed to use a telephoto lens with a longer focal length (for example, a 4x telephoto lens or a 5x telephoto lens; to be specific, an equivalent focal length of 108 mm or 135 mm), then in the possible designs mentioned above the target zoom ratio ranges, the lens parameters and the lens combination mode can all be adaptively adjusted based on the preceding theory, thereby obtaining an image that meets the user's requirement. For example, the equivalent focal length of the second camera can be greater than 27 mm. These possible designs shall all fall within the protection scope of the present invention.
[0027] In addition, if the user tolerates a limited loss of definition in a zoomed image, or owing to the progress of image processing algorithms, the value of b can be greater than 5; for example, it can reach another value such as 5.5x or 6x.
[0028] More specifically, the foregoing possible designs can be implemented by a processor by invoking a program and an instruction in a memory to perform corresponding operations, for example, activating a camera, controlling the camera to capture an image, performing processing algorithms on a captured image, and generating and storing a final output image.
[0029] According to a third aspect, an embodiment of the present invention provides a terminal device, where the terminal device includes a memory, a processor, a bus, a first camera, a second camera and a third camera; the memory, the first camera, the second camera, the third camera and the processor are connected using the bus; the first camera and the third camera are color cameras, the second camera is a black and white camera, and the first camera, the second camera and the third camera are all cameras that use prime lenses; and an equivalent focal length of the third camera is greater than both an equivalent focal length of the first camera and an equivalent focal length of the second camera. The cameras are configured to capture an image signal under the control of the processor. The memory is configured to store a computer program and an instruction. The processor is configured to invoke the computer program and the instruction stored in the memory, so that the terminal device performs the method according to any of the possible designs mentioned above.
[0030] According to the third aspect, in a possible design, the terminal device further includes an antenna system, and the antenna system transmits and receives, under the control of the processor, a wireless communication signal to implement wireless communication with a mobile communications network; and the mobile communications network includes one or more of the following: a GSM network, a CDMA network, a 3G network, a 4G network, a 5G network, an FDMA network, a TDMA network, a PDC network, a TACS network, an AMPS network, a WCDMA network, a TDSCDMA network, a Wi-Fi network and an LTE network.
[0031] The preceding method, apparatus and device can be applied not only to a scenario in which photographing software provided with the terminal performs photographing, but also to a scenario in which the terminal runs third-party photographing software to perform photographing.
[0032] According to the present invention, a lossless zoom effect of approximately 5x can be obtained on a smartphone, and a relatively good balance between resolving power and noise can be achieved even in a dark environment. BRIEF DESCRIPTION OF THE DRAWINGS
[0033] Figure 1 is a schematic structural diagram of a terminal according to an embodiment of the present invention; Figure 2 is a flowchart of a photographing method according to an embodiment of the present invention; Figure 3 shows a specific camera design according to an embodiment of the present invention; Figure 4 shows a photographing manner in optional Case 1 according to an embodiment of the present invention; Figure 5 shows a process of changing a target scene from an actually captured image to an output image according to an embodiment of the present invention; Figure 6 shows a photographing manner in optional Case 2 according to an embodiment of the present invention; Figure 7 shows a photographing manner in optional Case 3 according to an embodiment of the present invention; Figure 8 shows a photographing manner in optional Case 4 according to an embodiment of the present invention;
[0034] The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative efforts shall fall within the protection scope of the present invention.
[0035] In the embodiments of the present invention, a terminal may be a device that provides a user with photographing capability and/or data connectivity, a handheld device with a wireless connection function, or another processing device connected to a wireless modem, for example, a digital camera, a single-lens reflex camera, a mobile phone (or referred to as a cellular phone), or a smartphone, or it may be a portable, pocket-sized, handheld or wearable device (for example, a smartwatch), a tablet, a personal computer (PC, Personal Computer), a personal digital assistant (Personal Digital Assistant, PDA), a point of sale (Point of Sales, POS), a vehicle-mounted computer, a drone, an aerial camera, or the like.
[0036] Figure 1 is an optional schematic diagram of a hardware structure of a terminal 100.
[0037] As shown in Figure 1, the terminal 100 may include components such as a radio frequency unit 110, a memory 120, an input unit 130, a display unit 140, a camera 150, an audio circuit 160, a speaker 161, a microphone 162, a processor 170, an external interface 180 and a power supply 190. In this embodiment of the present invention, there are at least three cameras 150.
[0038] The camera 150 is configured to capture an image or a video, and can be triggered by an application instruction to implement a photographing or video-recording function. The camera may include components such as an imaging lens, a light filter and an image sensor. Light emitted or reflected by an object enters the imaging lens, passes through the light filter and eventually converges on the image sensor. The imaging lens is mainly configured to converge, into an image, the light emitted or reflected by all objects (which may also be referred to as objects to be photographed) in a photographing field of view. The light filter is mainly configured to filter out unnecessary light waves (for example, light waves other than visible light, such as infrared light). The image sensor is mainly configured to perform optical-to-electrical conversion on a received optical signal, convert the optical signal into an electrical signal, and input the electrical signal to the processor 170 for subsequent processing.
[0039] A person skilled in the art can understand that Figure 1 is merely an example of a portable multifunctional device and does not constitute a limitation on the portable multifunctional device. The portable multifunctional device may include more or fewer components than those shown in the figure, or some components may be combined, or different components may be used.
[0040] The input unit 130 may be configured to receive an input digit or character information, and to generate key signal input related to user settings and function control of the portable multifunctional device. Specifically, the input unit 130 may include a touchscreen 131 and another input device.
[0041] The display unit 140 may be configured to display information input by the user or information provided to the user and various menus of the terminal 100. In this embodiment of the present invention, the display unit is further configured to display an image obtained by the device using the camera 150.
[0042] In addition, the touchscreen 131 may cover a display panel 141. When a touch operation is detected on or near the touchscreen 131, the touchscreen 131 transmits the touch operation to the processor 170 to determine a type of touch event, and then the processor 170 provides a corresponding visual output on the display panel 141 based on the type of touch event. In this embodiment, the touchscreen and the display unit may be integrated into one component to implement the input, output and display functions of the terminal 100. For ease of description, in the embodiments of the present invention, a touch display screen is used to represent the functions of the touchscreen and the display unit. In some embodiments, the touchscreen and the display unit may alternatively be two independent components.
[0043] The memory 120 may be configured to store an instruction and data. The memory 120 may mainly include an instruction storage area and a data storage area. The data storage area can store an association relationship between a finger touch gesture and an application function. The instruction storage area can store software units such as an operating system, an application, and an instruction required for at least one function, or a subset or an extended set thereof. The memory 120 may further include a non-volatile random access memory, and provide the processor 170 with a program and an instruction for managing hardware, software and data resources of a computing and processing device, to support control over software and applications. The memory 120 may be further configured to store a multimedia file, and to store an operating program and an application.
[0044] The processor 170 is the control center of the terminal 100, and uses various interfaces and lines to connect the parts of the entire mobile phone. The processor 170 performs various functions of the terminal 100 and processes data by executing an instruction stored in the memory 120 and invoking data stored in the memory 120, to perform overall monitoring of the mobile phone. Optionally, the processor 170 may include one or more processing units. Optionally, an application processor and a modem processor may be integrated into the processor 170, where the application processor mainly processes an operating system, a user interface, an application and the like, and the modem processor mainly processes wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 170. In some embodiments, the processor and the memory may be implemented on a single chip. In some embodiments, alternatively, the processor and the memory may be implemented on separate chips. The processor 170 may be further configured to generate a corresponding operation and control signal, send the signal to a corresponding component of the computing and processing device, and read and process data in software, especially the data and a program in the memory 120, so that each function module of the computing and processing device performs a corresponding function, to control the corresponding component to operate as required by the instruction.
[0045] The radio frequency unit 110 may be configured to transmit and receive information or to transmit and receive a signal during a call, and particularly, after receiving downlink information from a base station, send the downlink information to the processor 170 for processing, and send designed uplink data to the base station. Generally, the RF unit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (Low Noise Amplifier, LNA), a duplexer and the like. In addition, the radio frequency unit 110 may communicate with a network device and another device through wireless communication. The wireless communication may use any communications standard or protocol, including but not limited to: Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS) and the like.
[0046] The audio circuit 160, the speaker 161 and the microphone 162 may provide an audio interface between the user and the terminal 100. The audio circuit 160 may transmit, to the speaker 161, an electrical signal converted from received audio data, and the speaker 161 converts the electrical signal into a sound signal for output. In another aspect, the microphone 162 is configured to collect a sound signal and convert the collected sound signal into an electrical signal. The audio circuit 160 receives the electrical signal, converts the electrical signal into audio data, and then sends the audio data to the processor 170 for processing, and the radio frequency unit 110 sends the audio data to, for example, another terminal. Alternatively, the audio data is sent to the memory 120 for further processing. The audio circuit may also include a headset jack 163, configured to provide a connection interface between the audio circuit and a headset.
[0047] The terminal 100 further includes the power supply 190 (such as a battery) that supplies power to each component. Optionally, the power supply may be logically connected to the processor 170 using a power management system, to implement functions such as charging, discharging and power consumption management using the power management system.
[0048] The terminal 100 further includes the external interface 180, and the external interface may be a standard Micro USB interface or a multi-pin connector. The external interface 180 may be configured to connect the terminal 100 to another device for communication, and may also be connected to a charger to charge the terminal 100.
[0049] The terminal 100 may further include a flash, a wireless fidelity (Wireless Fidelity, WiFi) module, a Bluetooth module, sensors with different functions, and the like, which are not shown in the figure. Details are not described in this document. All of the methods described below can be applied to the terminal shown in Figure 1.
[0050] As shown in Figure 2, an embodiment of the present invention discloses a photographing method. The method is applied to a photographing terminal, and the terminal includes a first camera, a second camera and a third camera; the first camera and the third camera are color cameras, the second camera is a black and white camera, the resolution of the second camera is higher than the resolution of the first camera and higher than the resolution of the third camera, and the first camera, the second camera and the third camera are all cameras that use prime lenses; an equivalent focal length of the third camera is greater than both an equivalent focal length of the first camera and an equivalent focal length of the second camera. The method includes the following steps: Step 21: Obtain a target zoom ratio.
[0051] Step 22: Determine at least one camera from the first camera, the second camera and the third camera as a target camera based on the target zoom ratio.
[0052] Step 23: Capture, using the target camera, at least one image that includes a target scene.
[0053] Step 24: Obtain an output image of the target scene based on the at least one image that includes the target scene. The target scene is a scene that a user expects to photograph. The resolution of the output image is lower than the resolution of the second camera.
[0054] The three preceding cameras may be located at the front of a terminal device or at the back of the terminal device. A specific arrangement of the cameras can be flexibly determined according to a designer's requirement. This is not limited in this application.
[0055] It is an industry convention to convert the imaging field of view on photosensitive elements of different sizes into the focal length of a lens that corresponds to the same field of view on a 135 film camera (the photosensitive surface of the 135 film camera corresponds to the 35 mm film format). The focal length obtained through this conversion is the equivalent focal length for the 135 film camera. The size of the photosensitive element (CCD or CMOS) in a digital camera varies from camera to camera (the size is, for example, 1.016 cm (1/2.5 inch) or 1.411 cm (1/1.8 inch)). Therefore, lenses of the same focal length produce different imaging fields of view on digital cameras with photosensitive elements of different sizes. However, what really matters to the user is the photographing range (the size of the field of view) of a camera. In other words, people are more concerned with the equivalent focal length than with the actual focal length.
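As a minimal illustration of this convention, the sketch below scales an actual focal length by the diagonal crop factor relative to the 36 x 24 mm frame of 135 film. The sensor dimensions used in the example are assumptions for illustration, not parameters disclosed for the cameras in this document.

```python
import math

FULL_FRAME_DIAGONAL_MM = math.hypot(36.0, 24.0)  # ~43.27 mm for 135 film

def equivalent_focal_length(actual_focal_mm: float,
                            sensor_width_mm: float,
                            sensor_height_mm: float) -> float:
    """Scale the actual focal length by the diagonal crop factor."""
    sensor_diagonal = math.hypot(sensor_width_mm, sensor_height_mm)
    crop_factor = FULL_FRAME_DIAGONAL_MM / sensor_diagonal
    return actual_focal_mm * crop_factor

# Example: a 4.5 mm lens on a roughly 1/2.5-inch-class sensor (~5.76 x 4.29 mm)
print(round(equivalent_focal_length(4.5, 5.76, 4.29), 1))  # roughly 27 mm
```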
[0056] In a specific implementation process, the equivalent focal length of the third camera is greater than both the equivalent focal length of the first camera and the equivalent focal length of the second camera. In addition to the combination (27 mm, 27 mm, 80 mm) of the equivalent focal lengths of the first camera, the second camera and the third camera in the preceding example, the equivalent focal length of the first camera and the equivalent focal length of the second camera may alternatively take other values from 25 mm to 35 mm, and the equivalent focal length of the third camera may be 2 to 4 times the equivalent focal length of the first camera or the second camera. As a camera using a telephoto lens, the third camera undertakes the task of obtaining a lossless zoom image at a higher zoom ratio when images obtained by the first camera and the second camera can no longer achieve lossless zoom through the use of an algorithm. This multiple is determined by the maximum zoom ratio at which the parameters and algorithm performance of the first camera and the second camera can still produce a lossless output image, and is, for example, 2.5 times, 3 times or 3.5 times. This is merely used as an example and is not a limitation.
[0057] In a specific implementation process, the aperture of the second camera is larger than both the aperture of the first camera and the aperture of the third camera. For example, the f-number of the second camera is 1.65, the f-number of the first camera is 1.8, and the f-number of the third camera is 2.4; for another example, the f-number of the second camera is 1.55, the f-number of the first camera is 1.7, and the f-number of the third camera is 2.2. This is merely used as an example and is not a limitation.
[0058] In a specific implementation process, the resolution of the second camera is higher than the resolution of the first camera and higher than the resolution of the third camera. In addition to the combination (20 M, 10 M, 10 M) of the resolution of the second camera, the resolution of the first camera and the resolution of the third camera in the preceding example, the combination may be, for example, (20 M, 10 M, 8 M), (24 M, 12 M, 12 M) or (24 M, 12 M, 10 M). This is merely used as an example and is not a limitation.
[0059] The color camera can be understood as an RGB sensor that can capture color information of the target scene and take a color photo. The black and white camera can be understood as a monochrome sensor that captures only a black and white scene. Since a monochrome sensor can capture more details of a scene, the black and white camera can capture details and outlines in the target scene.
[0060] It should be understood that the imaging principle of the black and white camera determines that, compared with a color camera of the same resolution, the black and white camera has greater resolving power and a greater ability to render details. Specifically, when a black and white camera and a color camera have the same resolution and the same pixel size (pixel size), the resolving power in the diagonal direction of an image captured by the black and white camera is twice the resolving power in the diagonal direction of an image captured by the color camera. In addition, if a black and white camera with a higher resolution is used, for example, if the ratio of the output resolution of the black and white camera to the output resolution of the color camera is T, then for an output image composed from images respectively captured by the black and white camera and the color camera, the optical zoom capabilities in the horizontal and vertical directions are increased by T times and the optical zoom capability in the diagonal direction is increased by 2T times, compared with the zoom capability of the color camera alone. For example, if the resolution of the color camera is 12 M (3,968 x 2,976) and the resolution of the black and white camera is 20 M (5,120 x 3,840), the optical zoom capability is increased accordingly (T = 5,120/3,968, which is approximately 1.3).
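As a quick numerical check of the ratio T described above, using the example resolutions from this paragraph (the gain figures are derived from those numbers only):

```python
# Illustrative arithmetic for the resolution ratio T described above.
color_w, color_h = 3968, 2976   # 12 M color sensor (example)
mono_w, mono_h = 5120, 3840     # 20 M black and white sensor (example)

T = mono_w / color_w            # ratio of output widths (same as the height ratio here)
print(round(T, 2))              # ~1.29: horizontal/vertical gain
print(round(2 * T, 2))          # ~2.58: diagonal gain claimed for the mono sensor
```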
[0061] If both the black and white camera and the color camera participate in image generation, the rich color information captured by the color camera can be merged with the clear details captured by the black and white camera to obtain a higher quality photo.
[0062] Specifically, in step 21, obtaining a target zoom ratio means obtaining a magnification selected by the user, for example, 1.5x (1.5x) zoom, 2x (2x) zoom or 3x (3x) zoom. A predefined field of view may be used as a reference for the target zoom ratio, and the predefined field of view can be flexibly selected by the user or the designer, for example, a FOV (field of view) of 78 degrees as a reference. The target zoom ratio value is represented as n. For example, if the adjustable precision of a camera's focal length is 0.1, nx can be 1.1x, 1.2x or 1.3x; or if the adjustable precision of the focal length is 0.5, nx can be 1.5x, 2.0x, 2.5x or the like.
[0063] It should be understood that the user can select a zoom ratio by using a zoom ratio button on a photographing device or by entering a gesture command on a display screen of the photographing device. Alternatively, the zoom ratio can be determined by the system based on user input at a specific position.
[0064] In addition, when the user presses a photographing button on the photographing device or the photographing device receives the gesture command entered on its display screen, that is, when a shutter is triggered, the target camera captures the image that includes the target scene. Specifically, the target camera may capture at least one image within an exposure time, and the terminal processes the captured images to obtain the output image of the target scene.
[0065] It should be understood that the target scene is a scene that the user expects to photograph, and a preview image obtained when the camera system is adjusted to the target zoom ratio is the user's most intuitive perception of the target scene. However, all the cameras provided in the present invention are cameras that use prime lenses. If the photographing distance of a camera using a prime lens is fixed, its field of view is fixed; therefore, an image actually captured by the target camera has a larger field of view than the target scene, that is, an image that includes the target scene is captured.
[0066] With reference to Figure 2, Figure 3 shows a relatively specific camera design according to an embodiment of the present invention. The design includes three cameras that use prime lenses. Based on the different features of the three cameras, at different zoom ratios, at least one camera is selectively activated to capture an image, and image processing is performed on the captured image to achieve a lossless zoom of approximately 5x. In short, for an image photographed by a camera at nx zoom and an image photographed at 1/n of the distance from the object without zoom, if the details and definition of the two images are equivalent, the nx zoom is referred to as lossless zoom. Optical zoom is generally used as a reference, and optical zoom is considered to be lossless; therefore, if a zoom effect is similar to that of optical zoom, the zoom can be referred to as lossless zoom. There are some objective tests that can be used to measure the resolving power and definition of an image, for example, the Siemens star chart provided by Image Engineering (IE).
[0067] The present invention can be implemented in a portable mobile terminal or a smart photographing terminal, such as a mobile phone or a tablet. The user enters a zoom mode and selects a zoom ratio, and the camera that needs to be activated is determined by the photographing system on the terminal based on the user's zoom ratio and a predetermined camera combination mode. The activated camera is used to consecutively photograph a plurality of image frames (if a plurality of cameras is activated, the plurality of cameras performs photographing synchronously), and a clear zoom image is obtained from the multi-frame images through the use of a corresponding predefined algorithm.
[0068] Based on Figure 3, specific examples are used below to describe, case by case, the different photographing manners and image processing manners provided in the present invention at different target zoom ratios, where specific parameters of the first camera may be as follows: an equivalent focal length of 27 mm, a color camera and a resolution of 10 M; specific parameters of the second camera may be as follows: an equivalent focal length of 27 mm, a black and white camera and a resolution of 20 M; and specific parameters of the third camera may be as follows: an equivalent focal length of 80 mm, a color camera and a resolution of 10 M.
Case 1
[0069] Reference can be made to Figure 4.
[0070] S101. When the target zoom ratio is within the range of 1x to 3x, the terminal activates the first camera and the second camera.
[0071] When the user sets a camera parameter, once the target zoom ratio set by the user is within the range (1, 3), the terminal activates the first camera (a primary camera) and the second camera (a secondary camera). In this case, the preview image changes accordingly, and the preview image is an image of the target scene that the user expects to photograph. The first camera is the primary camera and the second camera is the secondary camera; therefore, the preview image is a part of an image actually captured by the first camera, and the size of the part is determined by both the target zoom ratio and a predetermined length-to-width ratio (for example, 4:3 or 16:9) of the output image. It should be understood that the images actually captured by the first camera and the second camera differ in content from the actual preview image (this also applies to the third camera in the description below). The images actually captured by the first camera and the second camera may not be visible to the user. The preview image is the user's intuitive perception of the target scene that the user expects to photograph and a more intuitive representation of the target zoom ratio.
[0072] For ease of description, the width of an image actually photographed by the first camera is denoted as w₀, its height is denoted as h₀, and the resolution of the first camera is w₀ × h₀; the width of an image actually photographed by the second camera is denoted as w₁, its height is denoted as h₁, and the resolution of the second camera is w₁ × h₁. Once the resolutions of the first camera and the second camera are determined, w₀, h₀, w₁ and h₁ can be considered constant.
[0073] In one case, if w₀ and h₀ correspond to the predetermined length-to-width ratio of the output image, the width and height of the final output image are also w₀ and h₀. In another case, if w₀ and h₀ do not correspond to the predetermined length-to-width ratio of the output image, and the width and height of the final output image are w₀' and h₀', the camera system needs to crop an actually captured image of w₀ × h₀ to an image of w₀' × h₀' before subsequent image processing is performed. It should be understood that, for ease of description of the following algorithms, the five examples in Cases 1 to 5 are all described based on the former case. The latter case can be derived by a person skilled in the art using common mathematical knowledge. Details are not described in the present invention.
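Purely as an illustration of the aspect-ratio adjustment mentioned above, the sketch below center-crops a w₀ × h₀ frame to a target length-to-width ratio before any further processing. The function name and the use of NumPy arrays are assumptions for the sketch, not part of the patent.

```python
import numpy as np

def crop_to_aspect_ratio(image: np.ndarray, target_w: int, target_h: int) -> np.ndarray:
    """Center-crop an H x W (x C) frame so that width:height == target_w:target_h."""
    h, w = image.shape[:2]
    if w * target_h > h * target_w:          # frame too wide: trim the width
        new_w = h * target_w // target_h
        x0 = (w - new_w) // 2
        return image[:, x0:x0 + new_w]
    new_h = w * target_h // target_w         # frame too tall: trim the height
    y0 = (h - new_h) // 2
    return image[y0:y0 + new_h, :]

# Example: adapt a 3968 x 2976 (4:3) capture to a 16:9 output before processing.
frame = np.zeros((2976, 3968, 3), dtype=np.uint8)
print(crop_to_aspect_ratio(frame, 16, 9).shape)  # (2232, 3968, 3)
```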
[0074] S102. When a photographing function is triggered, the first camera and the second camera respectively take consecutive photographs of their respective actually captured scenes, to respectively obtain m₀ color image frames and m₁ black and white image frames, where m₀ may be equal to m₁. The relationship between the values of m₀ and m₁ and the specific values of m₀ and m₁ are not limited in the present invention. In an implementation, the values of m₀ and m₁ may be 4, 6 or the like; the color image frames may be consecutive or non-consecutive in a time sequence, and the black and white image frames may be consecutive or non-consecutive in the time sequence.
[0075] In an implementation, m₀ or m₁ may be 1, but in this case the subsequent multi-frame zoom operation does not apply; that is, after the subsequent S103 is performed, the black-and-white and color fusion operation in S105 is performed directly. However, m₀ or m₁ should generally be greater than 1, in which case the subsequent multi-frame zoom operation applies, that is, the subsequent S103, S104 and S105 are performed.
[0076] It should be understood that the first camera and the second camera are cameras that use prime lenses; therefore, an image actually photographed also includes content different from the preview image, rather than including only the zoomed target scene that the user expects to photograph and that is visible to the user in the preview image.
[0077] S103. Perform central area cropping (also briefly referred to as center cropping) on the m₀ color image frames, to crop m₀ color image frames of size w₀ × h₀/n² from the images actually photographed by the first camera. Perform central area cropping on the m₁ black and white image frames, to crop m₁ black and white image frames of size w₁ × h₁/n² from the images actually photographed by the second camera.
[0078] Central area cropping can be understood as cropping out the area that the user expects to photograph, that is, cropping a valid area of a specific size while ensuring that the center of the input image remains unchanged. The cropped area is determined by both the user-specified target zoom ratio and the equivalent focal length of the camera.
[0079] Therefore, in terms of the user's intuitive perception, the target scene can strictly mean the preview image at the target zoom ratio, or broadly mean the area obtained by central area cropping.
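The following sketch illustrates the center crop for a target zoom ratio n, giving the w × h/n² area described in S103. The n_cam parameter is my generalization for the later cases, where the telephoto camera's own equivalent focal length already contributes a zoom of about n₀, so only a residual crop of n/n₀ is needed; it is an assumption of the sketch, not a term used in the patent.

```python
import numpy as np

def center_crop_for_zoom(image: np.ndarray, n: float, n_cam: float = 1.0) -> np.ndarray:
    """Center-crop a frame for a target zoom ratio n.

    n_cam is the zoom already provided by the camera's own focal length
    (1 for the wide cameras, roughly 3 for the telephoto one), so the crop
    keeps an area of w*h / (n / n_cam)**2, as described in the text.
    """
    h, w = image.shape[:2]
    factor = n / n_cam
    new_w, new_h = int(w / factor), int(h / factor)
    x0, y0 = (w - new_w) // 2, (h - new_h) // 2
    return image[y0:y0 + new_h, x0:x0 + new_w]

# Example: a 2x target zoom on the first (wide) camera keeps the central quarter of the area.
frame = np.zeros((2976, 3968, 3), dtype=np.uint8)
print(center_crop_for_zoom(frame, n=2.0).shape)  # (1488, 1984, 3)
```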
[0080] S104. Perform multi-frame zoom on the m₀ color image frames of size w₀ × h₀/n², to obtain a color multi-frame zoom result, namely, a color zoom image frame of size w₀ × h₀. Perform multi-frame zoom on the m₁ black and white image frames of size w₁ × h₁/n², to obtain a black-and-white multi-frame zoom result, namely, a black and white zoom image frame of size w₁ × h₁.
[0081] The target scene is cropped from an actually captured image, and an image frame of the target scene is obtained using multi-frame zoom. For the area of the target scene and the change in size, refer to Figure 5.
[0082] In a specific implementation process, hand shake inevitably occurs because the photographing device is held by the user during photographing. Therefore, the plurality of image frames inevitably has slightly different image content, and different image frames of the same object in the target scene differ slightly in definition. Therefore, information sampled from corresponding positions of the plurality of image frames can be used to complement each other, so that the frames are merged into an image frame with higher resolution, greater definition and less noise.
[0083] An optional multi-frame zoom algorithm procedure is as follows: (1) Select a reference frame. Common methods include: selecting the first frame, selecting a frame photographed at an intermediate time, or selecting a clearer frame. For example, the clearer of the first two frames may be selected as the reference frame.
[0084] In the preceding procedure, motion compensation, interpolation amplification, convolutional neural networks and the like can be implemented in many ways. The multi-frame zoom algorithm can also be implemented in a number of ways, and the algorithm is not limited in the present invention. A person skilled in the art should understand that there are many open-source algorithms that can be invoked to implement the preceding procedure and, therefore, details are not described in this document.
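The sketch below is one greatly simplified reading of this procedure, not the patent's algorithm: motion compensation is reduced to a translation-only estimate via phase correlation, interpolation amplification uses bicubic resizing, and the merge is a plain average of the aligned, upscaled frames. NumPy and OpenCV are assumed to be available.

```python
import numpy as np
import cv2  # used only for the interpolation-amplification step

def estimate_shift(reference: np.ndarray, frame: np.ndarray) -> tuple[int, int]:
    """Translation-only motion estimate via phase correlation (a stand-in for
    the motion compensation step; real pipelines use richer motion models)."""
    f0 = np.fft.fft2(reference.astype(np.float32))
    f1 = np.fft.fft2(frame.astype(np.float32))
    cross = f0 * np.conj(f1)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = reference.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

def multi_frame_zoom(frames: list[np.ndarray], zoom: float) -> np.ndarray:
    """Align grayscale frames to frame 0, upscale each by `zoom`, and average."""
    reference = frames[0]
    h, w = reference.shape
    out_size = (int(w * zoom), int(h * zoom))  # (width, height) for cv2.resize
    accum = np.zeros((out_size[1], out_size[0]), dtype=np.float32)
    for frame in frames:
        dy, dx = estimate_shift(reference, frame)
        aligned = np.roll(frame, shift=(dy, dx), axis=(0, 1))
        accum += cv2.resize(aligned.astype(np.float32), out_size,
                            interpolation=cv2.INTER_CUBIC)
    return np.clip(accum / len(frames), 0, 255).astype(np.uint8)
```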
[0085] S105. Perform black-and-white and color fusion on the color zoom image frame and the black and white zoom image frame obtained in S104, to obtain a color output image frame of size w₀ × h₀, namely, an output image of the target scene, which is an image that can be saved by the user. Here, the resolution of the output image of the target scene is the same as the resolution of the first camera and the resolution of the third camera.
[0086] An optional procedure of the black-and-white and color fusion algorithm is as follows: Algorithm procedure: (1) Select a fusion branch based on factors such as the distance of the scene. For example, color information can be fused onto a black and white base. Alternatively, the high-frequency black and white information is fused onto a color information base.
[0087] In the preceding algorithm procedure, the related processing methods can use mature algorithms in the prior art, such as fusion, alignment and sharpening, which are not limited and are not described in the present invention.
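One plausible, simplified instance of the second fusion branch described above (fusing high-frequency black-and-white detail onto a color base) is sketched below; it is not the patent's actual fusion algorithm. It assumes the two frames are already aligned, and uses OpenCV's YCrCb conversion so that only the luminance channel receives the mono detail.

```python
import numpy as np
import cv2

def fuse_mono_and_color(color_bgr: np.ndarray, mono: np.ndarray,
                        detail_weight: float = 0.6) -> np.ndarray:
    """Blend the luminance detail of an aligned mono frame into a color frame."""
    h, w = color_bgr.shape[:2]
    if mono.shape[:2] != (h, w):
        mono = cv2.resize(mono, (w, h), interpolation=cv2.INTER_CUBIC)
    ycrcb = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    y = ycrcb[:, :, 0]
    mono = mono.astype(np.float32)
    # Keep the color frame's low-frequency brightness; take high-frequency
    # detail (edges, texture) from the higher-resolution mono frame.
    mono_detail = mono - cv2.GaussianBlur(mono, (0, 0), sigmaX=3)
    ycrcb[:, :, 0] = np.clip(y + detail_weight * mono_detail, 0, 255)
    return cv2.cvtColor(ycrcb.astype(np.uint8), cv2.COLOR_YCrCb2BGR)
```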
[0088] As described above, when both the black and white camera and the color camera participate in image generation, the rich color information captured by the color camera is merged with the clear details captured by the black and white camera, to obtain a higher quality photo.
Case 2
[0089] Reference can be made to Figure 6.
[0090] S201. When the target zoom ratio is within the range of 3x to 5x, the system needs to determine whether the target scene is in a dark environment. If the target scene is not in the dark environment, the terminal activates the third camera.
[0091] The dark environment can be determined based on whether a predefined condition is satisfied. If the illuminance is lower than 100 lux, the target scene is considered to be in the dark environment; and if the illuminance is not lower than 100 lux, the target scene is considered not to be in the dark environment. The predefined illuminance value is determined by the user or the terminal design vendor, and is not limited in the present invention. In a specific implementation process, the terminal can perform the determination using the ISO value used for normal exposure. If the ISO value is greater than or equal to 400, the target scene is considered to be in the dark environment; or if the ISO value is less than 400, the target scene is considered to be in a non-dark environment. The specific ISO value is determined by the user or the terminal design manufacturer, and is not limited in the present invention.
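A minimal sketch of this determination is shown below; the 100 lux and ISO 400 thresholds are the examples given in this paragraph, and the function and constant names are assumptions for illustration.

```python
# Illustrative only: thresholds are the examples from this paragraph;
# a real terminal may use different predefined values.
DARK_LUX_THRESHOLD = 100
DARK_ISO_THRESHOLD = 400

def is_dark_environment(illuminance_lux: float = None, auto_iso: int = None) -> bool:
    """Decide whether the target scene counts as a dark environment."""
    if illuminance_lux is not None:
        return illuminance_lux < DARK_LUX_THRESHOLD
    if auto_iso is not None:
        return auto_iso >= DARK_ISO_THRESHOLD
    return False  # no measurement available: assume a non-dark scene

print(is_dark_environment(illuminance_lux=50))   # True
print(is_dark_environment(auto_iso=200))         # False
```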
[0092] When the user adjusts a camera parameter, as long as the target zoom ratio set by the user is within the range [3, 5], the third camera is activated. In this case, the preview image changes accordingly, and the preview image is a part of an image actually captured by the third camera. The size of the part is determined by both the target zoom ratio and a predefined length-to-width ratio of the output image.
[0093] S202. When a photographing function is triggered, the third camera takes consecutive photographs of the actually captured scene, to obtain m₂ color image frames.
[0094] The width of an image actually photographed by the third camera is denoted as w₂, its height is denoted as h₂, and the resolution of the third camera is w₂ × h₂. Since the resolution of the third camera is the same as the resolution of the first camera, w₂ = w₀ and h₂ = h₀. In addition, m₂ can be equal to m₀, and the remaining steps and accompanying drawings can alternatively be expressed using w₀, h₀ and m₀.
[0095] S203. Perform central area cropping on the m₂ color image frames, to crop m₂ color image frames of size w₂ × h₂/(n/n₀)² from the images actually photographed by the third camera. The equivalent focal length of the third camera is 80 mm, that is, the third camera is a camera that uses a telephoto lens; compared with a camera that uses a standard lens (with an equivalent focal length of 27 mm), the third camera itself already provides a zoom effect of approximately n₀ = 3 times, so only the residual ratio n/n₀ needs to be achieved by cropping.
[0096] S204. Perform multi-frame zoom on the m₂ color image frames of size w₂ × h₂/(n/n₀)², to obtain a color multi-frame zoom result, namely, a color zoom image frame of size w₂ × h₂, which is an output image of the target scene and is also an image that can be saved by the user. Here, the resolution of the output image of the target scene is the same as the resolution of the first camera and the resolution of the third camera.
[0097] For the multi-frame zoom algorithm in S204, refer to the multi-frame zoom algorithm in S104.
Case 3
[0098] Reference can be made to Figure 7.
[0099] S301. When the target zoom ratio is within the range of 3x to 5x, the system needs to determine whether the target scene is in a dark environment. If the target scene is in the dark environment, the terminal activates the second camera and the third camera. For the determination of the dark environment, refer to S201.
[00100] When the user adjusts a camera parameter, as long as the target zoom ratio set by the user is within the range [3, 5], the third camera (a primary camera) and the second camera (a secondary camera) are activated. In this case, the preview image changes accordingly, and the preview image is a part of an image actually captured by the third camera. The size of the part is determined by both the target zoom ratio and a predefined length-to-width ratio of the output image.
[00101] S302. When a photographing function is triggered, the third camera and the second camera respectively take consecutive photographs of their respective actually captured scenes, to respectively obtain m₂ color image frames and m₁ black and white image frames.
[00102] The width of an image actually photographed by the third camera is denoted as w₂, its height is denoted as h₂, and the resolution of the third camera is w₂ × h₂. Since the resolution of the third camera is the same as the resolution of the first camera, w₂ = w₀ and h₂ = h₀. In addition, m₂ can be equal to m₀, and the remaining steps and accompanying drawings can alternatively be expressed using w₀, h₀ and m₀.
[00103] The width of an image actually photographed by the second camera is denoted as w₁, its height is denoted as h₁, and the resolution of the second camera is w₁ × h₁.
[00104] It should be understood that the third camera and the second camera are cameras that use prime lenses; therefore, an image actually photographed also includes content different from the preview image, rather than including only the zoomed target scene that the user expects to photograph and that is visible to the user in the preview image.
[00105] S303. Perform central area cropping on the m₂ color image frames, to crop m₂ color image frames of size w₂ × h₂/(n/n₀)² from the images actually photographed by the third camera. Perform central area cropping on the m₁ black and white image frames, to crop m₁ black and white image frames of size w₁ × h₁/n² from the images actually photographed by the second camera. Here, n₀ is approximately equal to 3.
[00106] S304. Perform multi-frame zoom on the m₂ color image frames of size w₂ × h₂/(n/n₀)² obtained in S303, to obtain a color multi-frame zoom result, namely, a color zoom image frame of size w₂ × h₂. Perform multi-frame zoom on the m₁ black and white image frames of size w₁ × h₁/n² obtained in S303, to obtain a black-and-white multi-frame zoom result, namely, a black and white zoom image frame of size w₁ × h₁/n₀².
[00107] For the multi-frame zoom algorithm in S304, refer to the multi-frame zoom algorithm in S104.
[00108] S305. Perform telephoto (tele) and black-and-white fusion on the color zoom image frame of size w₂ × h₂ and the black and white zoom image frame of size w₁ × h₁/n₀² obtained in S304, to obtain a color output image frame of size w₂ × h₂, namely, an output image of the target scene. Here, the resolution of the output image of the target scene is the same as the resolution of the first camera and the resolution of the third camera.
[00109] [00109] An optional procedure for a telephoto and black-and-white fusion algorithm is as follows: (1) Using the color zoom image (which may be referred to as the tele image) corresponding to the telephoto lens as a reference, align the black-and-white zoom image to the tele image, to obtain a motion-area mask.
[00110] [00110] In the preceding algorithm procedure, the related processing methods, such as fusion, alignment and sharpening, use state-of-the-art algorithms, which are not limited or described in detail in the present invention.
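Since only step (1) of the fusion procedure is given above, the following is merely one common way such a telephoto/black-and-white fusion can be assembled, not necessarily the method used here: blend the higher-detail monochrome luminance into the color frame's luma channel wherever the motion mask reports a static region.

```python
import cv2
import numpy as np

def fuse_tele_and_mono(color, mono_aligned, motion_mask):
    """Rough illustration of a tele/black-and-white fusion step (not the
    patented algorithm): where the scene is static, blend the monochrome
    luminance into the color frame's luma channel; where motion was
    detected, keep the color frame untouched."""
    ycrcb = cv2.cvtColor(color, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    mono = mono_aligned.astype(np.float32)
    static = (motion_mask == 0).astype(np.float32)   # 1 where no motion was detected
    ycrcb[..., 0] = static * mono + (1.0 - static) * ycrcb[..., 0]
    return cv2.cvtColor(ycrcb.astype(np.uint8), cv2.COLOR_YCrCb2BGR)
```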
Case 4
[00111] [00111] Reference can be made to Figure 8.
[00112] [00112] S401. When the target zoom ratio is within a range of 5x to 10x, the system needs to determine whether the target scene is in a dark environment. If the target scene is not in the dark environment, the terminal activates the third camera. For the determination of the dark environment, refer to S201.
[00113] [00113] When the user adjusts a camera parameter, as long as the target zoom ratio defined by the user is within a range (5, 10], the third camera is activated. In this case, the preview image changes accordingly, and the preview image is a part of an image actually captured by the third camera. A part size is determined by both the target zoom ratio and a predefined ratio between length and width of the output image.
[00114] [00114] S402. When a shooting function is triggered, the third camera takes consecutive photos of the actually captured scene, to obtain m2 color image frames.
[00115] [00115] A width of an image photographed by the third camera is represented as w2, a height thereof is represented as h2, and the resolution of the third camera is w2*h2. Since the resolution of the third camera is the same as the resolution of the first camera, w2 = w0 and h2 = h0. In addition, m2 can be equal to m0, and the remaining steps and attached drawings can alternatively be expressed using w0, h0 and m0.
[00116] [00116] S403. Perform center-area cropping on the m2 color image frames, to crop, from the images actually photographed by the third camera, m2 color image frames with a size of w0*h0/(n/n0)². The equivalent focal length of the third camera is 80 mm, that is, the third camera is a camera that uses a telephoto lens; compared with a camera that uses a standard lens, it captures a narrower field of view of the target scene, so less cropping is needed to reach the target zoom ratio.
[00117] [00117] S404. Perform multi-frame zoom on the m2 color image frames with a size of w0*h0/(n/n0)² obtained in S403, to obtain a color multi-frame zoom result, namely, a w0*h0/(n/n1)² color zoom image frame. In this document, n1 is a lossless zoom capability of the terminal photography system, namely, a maximum zoom ratio that meets a lossless condition, for example, 5x in this example. n1 is determined by the parameter performance of the entire terminal photography system and can be considered a constant.
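A worked example of how the sizes in S403 to S405 relate, using n0 ≈ 3 and n1 = 5 as stated in this document and an assumed 12 M (4000 x 3000) output size:

```python
w0, h0 = 4000, 3000          # assumed 12 M output size
n, n0, n1 = 8.0, 3.0, 5.0    # target zoom, tele factor, lossless zoom capability

crop_w, crop_h = w0 / (n / n0), h0 / (n / n0)        # S403 crop:   1500 x 1125
mfz_w, mfz_h   = w0 / (n / n1), h0 / (n / n1)        # S404 result: 2500 x 1875
digital_factor = n / n1                              # S405 upscale: 1.6x per axis
print(crop_w, crop_h, mfz_w, mfz_h, digital_factor)  # 1500.0 1125.0 2500.0 1875.0 1.6
```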
[00118] [00118] For a multi-frame zoom algorithm in S404, refer to the multi-frame zoom algorithm in S104.
[00119] [00119] S405. Perform digital zoom on the w0*h0/(n/n1)² color zoom image frame obtained in S404, to obtain a w0*h0 color zoom image frame, namely, an output image of the target scene. In this document, the resolution of the output image of the target scene is the same as the resolution of the first camera or the resolution of the third camera.
[00120] [00120] There are many methods for digital zoom, for example, interpolation amplification, and common interpolation methods include bilinear, bicubic, Lanczos and the like. Digital zoom can only amplify an image to a target resolution, but cannot guarantee the definition and resolving power of the image. Therefore, compared with lossless zoom, digital zoom is considered zoom with a certain loss, but it still reflects a certain imaging capability of a camera.
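A minimal sketch of the digital-zoom step using the interpolation kernels named above (bilinear, bicubic, Lanczos), here via OpenCV's resize; the wrapper name is illustrative.

```python
import cv2

_KERNELS = {
    "bilinear": cv2.INTER_LINEAR,
    "bicubic": cv2.INTER_CUBIC,
    "lanczos": cv2.INTER_LANCZOS4,
}

def digital_zoom(image, target_w, target_h, kernel="bicubic"):
    """Upscale to the target resolution by interpolation; this raises the
    pixel count but, as noted above, cannot add real detail."""
    return cv2.resize(image, (target_w, target_h), interpolation=_KERNELS[kernel])
```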
Case 5
[00121] [00121] Reference can be made to Figure 9.
[00122] [00122] S501. When the target zoom ratio is within a range of 5x to 10x, the system needs to determine whether the target scene is in a dark environment. If the target scene is in the dark environment, the terminal activates the second camera and the third camera. For the determination of the dark environment, refer to S201.
[00123] [00123] When the user adjusts a camera parameter, as long as the target zoom ratio defined by the user is within a range (5, 10], the third camera (a primary camera) and the second camera (a secondary camera) are activated. In this case, the preview image changes accordingly, and the preview image is a part of an image actually captured by the third camera. A part size is determined by both the target zoom ratio and a predefined ratio between length and width of the output image.
[00124] [00124] S502. When a shooting function is triggered, the third camera and the second camera each take consecutive photos of their respective actually captured scenes, to obtain m2 color image frames and m1 black-and-white image frames, respectively.
[00125] [00125] A width of an image actually photographed by the third camera is represented as w2, a height thereof is represented as h2, and the resolution of the third camera is w2*h2. Since the resolution of the third camera is the same as the resolution of the first camera, w2 = w0 and h2 = h0. In addition, m2 can be equal to m0, and the remaining steps and attached drawings can alternatively be expressed using w0, h0 and m0.
[00126] [00126] A width of an image actually photographed by the second camera is represented as w1, a height thereof is represented as h1, and the resolution of the second camera is w1*h1.
[00127] [00127] It should be understood that the third camera and the second camera are cameras that use primary lenses; therefore, an image actually photographed further includes content beyond the preview image, rather than including only the target zoom scene that the user expects to photograph and that is visible to the user in the preview image.
[00128] [00128] S503. Perform center-area cropping on the m2 color image frames, to crop, from the images actually photographed by the third camera, m2 color image frames with a size of w0*h0/(n/n0)². Perform center-area cropping on the m1 black-and-white image frames, to crop, from the images actually photographed by the second camera, m1 black-and-white image frames with a size of w1*h1/n0². In this document, n0 is approximately equal to 3.
[00129] [00129] S504. Perform multi-frame zoom on the m2 color image frames with a size of w0*h0/(n/n0)² obtained in S503, to obtain a color multi-frame zoom result, namely, a w0*h0/(n/n1)² color zoom image frame. Perform multi-frame zoom on the m1 black-and-white image frames with a size of w1*h1/n0² obtained in S503, to obtain a black-and-white multi-frame zoom result, namely, a w0*h0/(n/n1)² black-and-white zoom image frame.
[00130] [00130] For a multi-frame zoom algorithm in S504, refer to the multi-frame zoom algorithm in S104.
[00131] [00131] S505. Perform telephoto (tele-photo) and black-and-white fusion on the w0*h0/(n/n1)² color zoom image frame and the w0*h0/(n/n1)² black-and-white zoom image frame obtained in S504, to obtain a w0*h0/(n/n1)² color zoom image frame. In this document, n1 is a lossless zoom capability, namely, a maximum zoom ratio that meets a lossless condition, for example, 5x. n1 is determined by the parameter performance of the entire terminal photography system and can be considered a constant.
[00132] [00132] S506. Perform digital zoom on the w0*h0/(n/n1)² color zoom image frame obtained in S505, to obtain a w0*h0 color zoom image frame, namely, an output image of the target scene. In this document, the resolution of the output image of the target scene is the same as the resolution of the first camera or the resolution of the third camera.
[00133] [00133] A digital zoom algorithm is a mature technology in the state of the art, and reference can be made to S405.
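Putting the preceding sketches together, a hypothetical orchestration of the dark 5x-to-10x path (S501 to S506) could look as follows. Every helper comes from the earlier sketches, and the crude threshold used for the motion mask is a placeholder rather than anything specified in this document.

```python
def case5_pipeline(color_frames, mono_frames, n, n0=3.0, n1=5.0, out_size=(4000, 3000)):
    """Illustrative composition of crop -> multi-frame zoom -> fusion ->
    digital zoom for the dark 5x-10x case, reusing the sketch helpers above."""
    out_w, out_h = out_size
    mid_size = (int(out_w / (n / n1)), int(out_h / (n / n1)))     # lossless-capability size

    color_crops = [center_crop(f, n / n0) for f in color_frames]  # tele crops
    mono_crops = [center_crop(f, n0) for f in mono_frames]        # black-and-white crops

    color_zoom = naive_multi_frame_zoom(color_crops, mid_size)
    mono_zoom = naive_multi_frame_zoom(mono_crops, mid_size)

    # Crude placeholder for the motion-area mask produced by alignment in S305/S505.
    motion_mask = abs(color_zoom.mean(axis=-1) - mono_zoom) > 32
    fused = fuse_tele_and_mono(color_zoom, mono_zoom, motion_mask)
    return digital_zoom(fused, out_w, out_h)
```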
[00134] [00134] It should be understood that the five preceding cases are merely some optional implementations of the present invention, and the specific parameters mentioned above vary with the design of a camera parameter, an algorithm implementation, a user configuration, a terminal operating system and a terminal environment. In addition, expressions of the same parameters vary with the criteria of different references. Settings of specific parameters cannot all be listed. A person skilled in the art should understand that the present invention is intended to correspondingly use, according to different user zoom requirements, different ways of combining lenses to obtain pictures, obtain a final picture according to a corresponding algorithm, and achieve lossless image-generation quality within a wide zoom range from 1x to 5x. If a maximum target zoom ratio that meets a lossless condition is adjusted slightly, a person skilled in the art can follow the ways of combining lenses according to the modalities of the present invention and adaptively change a lens parameter, or use different types of algorithms, to achieve approximately lossless zoom. If the user allows a limited loss of definition in a zoom image, or the terminal device allows a larger telephoto lens to be used, the zoom ranges and lens combinations in the preceding modalities can all be adjusted accordingly, based on the preceding theory, to obtain an image that meets the user's requirement. These variant technical solutions shall fall within the scope of protection of the present invention.
[00135] [00135] It should be understood that, in a process of use by the user, owing to the user's actual requirement, different zoom ranges are used successively within a short period of time during a focusing process. Changing between these zoom ranges directly causes a change in camera activation. The five preceding cases are used as an example: for a specific activation state of each camera, refer to the states of the three cameras within different zoom ranges in Figure
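As a compact illustration of the activation logic that the preceding cases and the claims describe, a sketch of the camera selection is shown below; the handling of ratios at or below 1x is an assumption, and the exact boundary behavior at 3x and 5x simply follows the ranges as written.

```python
def select_cameras(target_zoom: float, is_dark: bool) -> set:
    """Camera activation as described by the cases and claims in this document
    (sketch only; boundary handling follows the ranges as written)."""
    if target_zoom <= 1.0:
        return {"first"}                      # assumption: no zoom -> primary color camera
    if target_zoom < 3.0:
        return {"first", "second"}            # (1, 3): wide color + black-and-white
    if is_dark:
        return {"second", "third"}            # [3, 10] dark: tele + black-and-white
    if target_zoom <= 5.0:
        return {"second", "third"}            # [3, 5] bright: tele + black-and-white
    return {"third"}                          # (5, 10] bright: tele only
```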
[00136] [00136] It should also be understood that in the preceding modalities, the resolution of the output image is the same as the resolution of the first camera or the resolution of the third camera and is less than the resolution of the second camera. In fact, the resolution of the output image should satisfy the user's requirement for definition, and is not necessarily equal to the resolution of the first camera or the resolution of the third camera. Generally, the first camera or the third camera represents the most basic imaging performance of the shooting terminal in different shooting modes. Therefore, the maximum resolution of the output image is approximately equal to the resolution of the first camera or the resolution of the third camera. Generally, during delivery of a terminal, the maximum resolution of the output image is basically determined, and the user can establish the resolution of the output image in the camera system according to the user's requirement.
[00137] [00137] In addition, in a specific implementation process, the shooting system is further configured to adjust an image-generation parameter of an optical zoom module according to a zoom mode of the target scene. The image-generation parameter includes at least one of the following: a noise reduction parameter, a sharpening parameter, or contrast, to control noise reduction, sharpening, contrast, and a dynamic range of an image during an intermediate process. For example, in a bright scene, an ISP module is controlled to disable a noise reduction and sharpening module, and in a low-light scenario, the ISP module is controlled to enable the noise reduction and sharpening module and adjust the parameters to an appropriate level. In addition, since the contrast and dynamic range parameters in the zoom mode are different from those in a common shooting mode, the contrast and dynamic range parameters can be adjusted in a customized way in different zoom modes. Therefore, according to the methods in the modalities of the present invention, the image-generation parameter can be configured according to different scenarios, to guarantee the image-generation quality of a final image.
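A hypothetical sketch of how such per-scenario image-generation parameters might be organized; the profile keys, values and the ISP interface are illustrative only and are not taken from this document.

```python
# Illustrative-only ISP tuning table keyed by (zoom mode, light condition).
ISP_PROFILES = {
    ("zoom", "bright"): {"noise_reduction": "off", "sharpening": "off",
                         "contrast": 1.0, "dynamic_range": "normal"},
    ("zoom", "low_light"): {"noise_reduction": "medium", "sharpening": "low",
                            "contrast": 0.9, "dynamic_range": "extended"},
}

def configure_isp(isp, zoom_mode: str, light: str) -> None:
    """Apply the profile for the current shooting scenario (sketch)."""
    for key, value in ISP_PROFILES[(zoom_mode, light)].items():
        isp.set_parameter(key, value)   # hypothetical ISP interface
```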
[00138] [00138] According to the present invention, an approximately 5x lossless zoom effect can be achieved on a smartphone, and a relatively good balance between resolution and noise can be achieved even in the dark environment. Using a combination of a plurality of cameras that use primary lenses instead of a functional device of a large size does not significantly increase the thickness of the terminal, thereby ensuring the aesthetics of the terminal, especially for a portable smart device such as a mobile phone, satisfying user requirements for a small, low-profile terminal and lossless image generation on a large zoom, and improving the user's experience of use.
[00139] [00139] Based on the method of photographing provided in the above-mentioned embodiment, one embodiment of the present invention provides a photographing apparatus 700, and the apparatus 700 may be applied to various photographing devices. As shown in Figure 10, the apparatus 700 includes an obtaining module 701, a determination module 702, a capture module 703, an image processing module 704, a first camera, a second camera and a third camera; the first camera and the third camera are color cameras, the second camera is a black and white camera, and the resolution of the second camera is higher than the resolution of the first camera and higher than the resolution of the third camera, and the first camera, the second camera and the third camera are all cameras that use primary lenses; and an equivalent focal length of the third camera is greater than both an equivalent focal length of the first camera and an equivalent focal length of the second camera. For related features, refer to the description of the method mentioned above.
[00140] [00140] The obtaining module 701 is configured to obtain a target zoom ratio, and the obtaining module 701 can be implemented by a processor that invokes a corresponding program instruction based on an input from the outside.
[00141] [00141] The determination module 702 is configured to determine at least one camera from the first camera, the second camera and the third camera based on the target zoom ratio as a target camera, and the determination module 702 can selectively activate the three preceding cameras when the processor invokes a program instruction stored in a memory.
[00142] [00142] Capture module 703 is configured to capture, using the target camera, an image that includes a target scene, and capture module 703 can be implemented by the processor, and stores a captured image in memory.
[00143] [00143] The image processing module 704 is configured to obtain an output image of the target scene based on the captured image that includes the target scene. The image processing module 704 can be implemented by the processor, and can be implemented by invoking data and an algorithm in a local memory or a cloud server for corresponding computing, so that a picture of the target scene that the user expects to obtain is output.
[00144] [00144] In a specific implementation process, the obtaining module 701 is specifically configured to perform the method mentioned in step 21 and an equivalently replaceable method; the determination module 702 is specifically configured to perform the method mentioned in step 22 and an equivalently replaceable method; the capture module 703 is specifically configured to perform the method mentioned in step 23 and an equivalently replaceable method; and the image processing module 704 is specifically configured to perform the method mentioned in step 24 and an equivalently replaceable method.
[00145] [00145] More specifically, in different target zoom ratios: the obtaining module 701 and the determining module 702 can collaboratively perform the methods of S101, S201, S301, S401 or S501; capture module 703 can perform the methods of S102, S202, S302, S402 or S502; and the image processing module 704 can perform the methods from S103 to S105, S203 and S204, S303 to S305, S403 to S405, or S503 to S506.
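A minimal sketch of how the module split of apparatus 700 could map onto code, reusing the select_cameras sketch shown earlier; the class, method names and placeholder bodies are illustrative and do not reflect the actual implementation.

```python
class PhotographingApparatus:
    """Sketch of apparatus 700: obtaining module 701, determination module 702,
    capture module 703 and image processing module 704 as plain methods."""

    def obtain_target_zoom(self, user_input: str) -> float:          # module 701
        return float(user_input)

    def determine_target_cameras(self, zoom: float, is_dark: bool):  # module 702
        return select_cameras(zoom, is_dark)   # reuses the selection sketch above

    def capture(self, cameras, burst: int = 4) -> dict:              # module 703
        # Placeholder: a real implementation would return `burst` frames per camera.
        return {camera: [] for camera in cameras}

    def process(self, frames_by_camera: dict, zoom: float):          # module 704
        # Placeholder: dispatch to a case-specific pipeline such as case5_pipeline.
        raise NotImplementedError
```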
[00146] [00146] The preceding specific method modalities, and the explanations, descriptions, and extensions of the plurality of ways of implementing the technical features in those modalities, are also applicable to the execution of the methods in the apparatus. Details are not described again in the apparatus modalities.
[00147] [00147] The present invention provides an image processing apparatus 700. Different combinations of cameras can be used for different zoom requirements, to photograph and process images, so as to achieve an approximately 5x lossless zoom effect without using a very bulky device, thereby improving the user experience and meeting an image quality requirement.
[00148] [00148] It should be noted that the division of the modules of the apparatus 700 is merely a division of logic functions. During actual implementation, all or some of the modules can be integrated into a physical entity, or they can be physically separate. For example, the modules can be processing elements arranged separately, or they can be integrated into a terminal chip for implementation. In addition, the modules can be stored in program code form in a storage element of a controller and invoked by a processing element of a processor to perform the functions of the modules. Furthermore, the modules can be integrated or can be implemented independently. The processing element in this document can be an integrated circuit chip and have a signal processing capability. In an implementation process, the steps in the preceding methods or in the preceding modules can be implemented using integrated logic hardware in the processing element or using an instruction in software form. The processing element can be a general-purpose processor, for example, a central processing unit, or it can be configured as one or more integrated circuits to implement the preceding methods, for example, one or more application-specific integrated circuits (ASIC), one or more digital signal processors (DSP), or one or more field-programmable gate arrays (FPGA).
[00149] [00149] A person skilled in the art should understand that the modalities of the present invention can be provided as a method, a system or a computer program product. Therefore, the present invention can use a form of hardware-only modalities, software-only modalities or modalities with a combination of software and hardware. In addition, the present invention may use a form of a computer program product that is implemented on one or more computer-usable storage media (including but not limited to disk memory, CD-ROM and optical memory) that include computer-usable program code.
[00150] [00150] The present invention is described with reference to the flowcharts and/or block diagrams of the method, the device (system) and the computer program product according to the modalities of the present invention. It should be understood that computer program instructions can be used to implement each process and/or each block in the flowcharts and/or block diagrams and a combination of a process and/or a block in the flowcharts and/or block diagrams. These computer program instructions can be provided to a general-purpose computer, a dedicated computer, an embedded processor, or a processor of another programmable data processing device to generate a machine, so that the instructions executed by the computer or the processor of the other programmable data processing device generate an apparatus for implementing a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
[00151] [00151] These computer program instructions can be stored in a computer-readable memory that can instruct the computer or another programmable data processing device to function in a specific way, so that the instructions stored in the computer-readable memory generate an artifact that includes an instruction apparatus. The instruction apparatus implements a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
[00152] [00152] These computer program instructions can be loaded onto a computer or another programmable data processing device, so that a series of operations and steps are performed on the computer or on the other programmable device, thereby generating computer-implemented processing. Therefore, the instructions executed on the computer or the other programmable device provide steps for implementing a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
[00153] [00153] Although some modalities of the present invention are described, a person skilled in the art can make a change and a modification in these modalities once he learns the basic inventive concept. Therefore, it is intended to consider the following claims as covering the listed modalities and all changes and modifications that fall within the scope of the present invention. Obviously, a person skilled in the art can make several modifications and variations in the modalities of the present invention without departing from the scope of the modalities of the present invention.
The present invention is intended to cover modifications and variations in the modalities of the present invention as long as these modifications and variations fall within the scope of protection defined by the following claims and their equivalent technologies.
Claims (20)
[1]
1. Method of photographing, characterized by the fact that the method comprises: selecting, from a first camera, a second camera and a third camera that are comprised in a terminal, at least one camera to capture at least one image that comprises a target scene, in which the first camera and the third camera are color cameras, and the second camera is a black and white camera; the first camera, the second camera and the third camera are all cameras that use primary lenses; an equivalent focal length of the third camera is greater than both an equivalent focal length of the first camera and an equivalent focal length of the second camera; the first camera, the second camera and the third camera are located on the back side of the terminal; and the equivalent focal length of the third camera is 2 to 4 times the equivalent focal length of the first camera; and obtaining an output image of the target scene based on at least one image comprising the target scene.
[2]
2. Method, according to claim 1, characterized by the fact that the selection, from a first camera, a second camera and a third camera that are comprised in a terminal, of at least one camera to capture at least one an image comprising a target scene comprises: when a current zoom ratio is within a zoom range of (1, 3), respectively, capture, using the first camera and the second camera, at least one color image and at least one black and white image comprising the target scene.
[3]
3. Method, according to claim 1, characterized by the fact that the selection, from a first camera, a second camera and a third camera that are comprised in a terminal, of at least one camera to capture at least one image comprising a target scene comprises: when a current zoom ratio is within a zoom range of [3, 10], and the illuminance of the target scene is less than a predefined limit, respectively capture, using the third camera and the second camera, at least one color image and at least one black and white image that comprise the target scene.
[4]
4. Method, according to claim 1, characterized by the fact that the selection, from a first camera, a second camera and a third camera that are comprised in a terminal, of at least one camera to capture at least one image that comprises a target scene comprises: when a current zoom ratio is within a zoom range of [3, 10], and the illuminance of the target scene is not less than a predefined limit, capture, using the third camera, at least one color image that comprises the target scene.
[5]
Method according to any one of claims 1 to 4, characterized in that the equivalent focal length of the first camera is 26 mm or 27 mm.
[6]
6. Method according to any one of claims 1 to 5, characterized in that the aperture value of the first camera is 1.7 or 1.8.
[7]
Method according to any one of claims 1 to 6, characterized in that the aperture value of the third camera is 2.4 or 2.2.
[8]
8. Method according to any one of claims 1 to 7, characterized in that the resolution of the first camera is 10 M or 12 M, and the resolution of the third camera is 8 M, 10 M or 12 M.
[9]
9. Terminal, characterized by the fact that the terminal comprises a memory, a processor, a first camera, a second camera and a third camera, in which the first camera and the third camera are color cameras, the second camera is a black and white camera, and the first camera, the second camera, and the third camera are all cameras that use primary lenses; an equivalent focal length of the third camera is greater than both an equivalent focal length of the first camera and an equivalent focal length of the second camera; the first camera, the second camera and the third camera are located at the rear of the terminal; and the equivalent focal length of the third camera is 2 to 4 times the equivalent focal length of the first camera; the memory is configured to store a computer program and an instruction; and the processor invokes the computer program and the instruction that are stored in the memory, to perform the following operations: select, from the first camera, the second camera and the third camera, at least one camera to capture at least one image that comprises a target scene; and obtain an output image of the target scene based on at least one image comprising the target scene.
[10]
10. Terminal, according to claim 9, characterized by the fact that the processor is specifically configured for: when a current zoom ratio is within a zoom range of (1, 3), respectively to capture, using the first camera and the second camera, at least one color image and at least one black and white image comprising the target scene; or when a current zoom ratio is within a zoom range of [3, 10], and the illuminance of the target scene is less than a predefined limit, respectively capture, using the third camera and the second camera, at least one color image and at least one black and white image comprising the target scene; or when a current zoom ratio is within a zoom range of [3, 10], and the illuminance of the target scene is not less than a predefined limit, capture, using a third camera, at least one color image comprising the target scene.
[11]
11. Terminal according to claim 9 or 10, characterized by the fact that the equivalent focal length of the first camera is 26 mm or 27 mm.
[12]
12. Terminal according to any one of claims 9 to 11, characterized by the fact that an aperture value of the first camera is 1.7 or 1.8.
[13]
13. Terminal according to any one of claims 9 to 12, characterized in that the aperture value of the third camera is 2.4 or 2.2.
[14]
14. Terminal according to any of claims 9 to 13, characterized in that the resolution of the first camera is 10 M or 12 M, and the resolution of the third camera is 8 M, 10 M or 12 M.
[15]
15. Photographing device, characterized by the fact that the device comprises: a capture module, configured to select, from a first camera, a second camera and a third camera that are comprised in a terminal, at least one camera to capture at least one image that comprises a target scene, in which the first camera and the third camera are color cameras, the second camera is a black and white camera; the first camera, the second camera and the third camera are all cameras that use primary lenses; an equivalent focal length of the third camera is greater than both an equivalent focal length of the first camera and an equivalent focal length of the second camera; the first camera, the second camera and the third camera are located at the rear of the terminal; and the equivalent focal length of the third camera is 2 to 4 times the equivalent focal length of the first camera; and an image processing module, configured to obtain an output image of the target scene based on at least one image comprising the target scene.
[16]
16. Apparatus according to claim 15, characterized by the fact that: when a current zoom ratio is within a zoom range of (1, 3), the capture module is specifically configured to respectively capture, using the first camera and the second camera, at least one color image and at least one black and white image comprising the target scene; or when a current zoom ratio is within a zoom range of [3, 10], and the illuminance of the target scene is less than a predefined limit, the capture module is specifically configured to respectively capture, using the third camera and the second camera, at least one color image and at least one black and white image comprising the target scene; or when a current zoom ratio is within a zoom range of [3, 10], and the illuminance of the target scene is not less than a predefined limit, the capture module is specifically configured to capture, using the third camera, at least one color image that comprises the target scene.
[17]
17. Apparatus according to claim 15 or 16, characterized by the fact that the equivalent focal length of the first camera is 26 mm or 27 mm.
[18]
18. Apparatus according to any one of claims 15 to 17, characterized in that the aperture value of the first camera is 1.7 or 1.8.
[19]
19. Apparatus according to any one of claims 15 to 18, characterized in that the aperture value of the third camera is 2.4 or 2.2.
[20]
20. Apparatus according to any one of claims 15 to 19, characterized in that the resolution of the first camera is 10 M or 12 M, and the resolution of the third camera is 8 M, 10 M or 12 M.