Archive for the ‘generated’ tag
Machine learning-guided realization of full-color high-quantum-yield carbon quantum dots – Nature.com
Posted: June 11, 2024 at 2:48 am
Workflow of ML-guided synthesis of CQDs
Synthesis parameters have great impacts on the target properties of resulting samples. However, it is intricate to tune various parameters for optimizing multiple desired properties simultaneously. Our ML-integrated MOO strategy tackles this challenge by learning the complex correlations between hydrothermal/solvothermal synthesis parameters and two target properties of CQDs in a unified MOO formulation, thus recommending optimal conditions that enhance both properties simultaneously. The overall workflow for the ML-guided synthesis of CQDs is shown in Fig. 1 and Supplementary Fig. 1. The workflow primarily consists of four key components: database construction, multi-objective optimization (MOO) formulation, MOO recommendation, and experimental verification.
Using a representative and comprehensive synthesis descriptor set is of vital importance in achieving the optimization of synthesis conditions36. We carefully selected eight descriptors to comprehensively represent the hydrothermal system, one of the most common methods to prepare CQDs. The descriptor list includes reaction temperature (T), reaction time (t), type of catalyst (C), volume/mass of catalyst (VC), type of solution (S), volume of solution (VS), ramp rate (Rr), and mass of precursor (Mp). To minimize human intervention, the bounds of synthesis parameters are determined primarily by the constraints of the synthesis methods and equipment used, instead of expert intuition. For instance, in employing the hydrothermal/solvothermal method to prepare CQDs, as the reactor inner pot is made of polytetrafluoroethylene, the usage temperature should not exceed 220 °C. Moreover, the capacity of the reactor inner pot used in the experiment is 25 mL, with general guidance of not exceeding 2/3 of this volume for reactions. Therefore, in this study, the main considerations of experimental design are to ensure experimental safety and accommodate the limitations of equipment. These practical considerations naturally led to a vast parameter space, estimated at 20 million possible combinations, as detailed in Supplementary Table 1. Briefly, the 2,7-naphthalenediol molecule, along with catalysts such as H2SO4, HAc, ethylenediamine (EDA) and urea, was adopted in constructing the carbon skeleton of CQDs during the hydrothermal or solvothermal reaction process (Supplementary Fig. 2). Different reagents (including deionized water, ethanol, N,N-dimethylformamide (DMF), toluene, and formamide) were used to introduce different functional groups into the architectures of CQDs, combined with other synthesis parameters, resulting in tunable PL emission. To establish the initial training dataset, we collected 23 CQDs synthesized under different randomly selected parameters.
Each data sample is labelled with experimentally verified PL wavelength and PLQY (see Methods).
To account for the varying importance of multiple desired properties, an effective strategy is needed to quantitatively evaluate candidate synthesis conditions in a unified manner. A MOO strategy has thus been developed that prioritizes full-color PL wavelength over PLQY enhancement, by assigning an additional reward when the maximum PLQY of a color surpasses the predefined threshold for the first time. Given $N$ explored experimental conditions $\{(x_i, y_i^c, y_i^\gamma) \mid i = 1, 2, \ldots, N\}$, $x_i$ denotes the $i$-th synthesis condition defined by the 8 synthesis parameters, while $y_i^c$ and $y_i^\gamma$ denote the corresponding color label and yield (i.e., PLQY) given $x_i$; $y_i^c \in \{c_1, c_2, \ldots, c_M\}$ for $M$ possible colors, and $y_i^\gamma \in [0, 1]$. The unified objective function is formulated as the sum of the maximum PLQY for each color label, i.e.,
$$\sum\nolimits_{c_j} Y_{c_j}^{\max},$$
(1)
where $j \in \{1, 2, \ldots, M\}$ and $Y_{c_j}^{\max}$ is 0 if $\nexists\, y_i^c = c_j$; otherwise
$$Y_{c_j}^{\max} = \max_i \left[ \left( y_i^\gamma + R \cdot \mathbb{1}\left(y_i^\gamma \ge \alpha\right) \right) \cdot \mathbb{1}\left(y_i^c = c_j\right) \right].$$
(2)
$\mathbb{1}(\cdot)$ is an indicator function that outputs 1 if its argument is true and 0 otherwise. The term $R \cdot \mathbb{1}(y_i^\gamma \ge \alpha)$ enforces the higher priority of full-color synthesis: the PLQY for each color must reach at least $\alpha$ ($\alpha = 0.5$ in our case) to earn an additional reward of $R$ ($R = 10$ in our settings). $R$ can be any real value larger than 1 (i.e., the maximum possible improvement of PLQY for one synthesis condition), which ensures higher priority for exploring synthesis conditions of colors whose yield has not yet reached $\alpha$. We set $R$ to 10 such that the tens digit of the unified objective function's value clearly indicates the number of colors with maximum PLQYs exceeding $\alpha$, while the units digit reflects the sum of the maximum PLQYs (without the additional reward) over all colors. As defined by the ranges of PL wavelength in Supplementary Table 2, the seven primary colors considered in this work are purple (<420 nm), blue (≥420 and <460 nm), cyan (≥460 and <490 nm), green (≥490 and <520 nm), yellow (≥520 and <550 nm), orange (≥550 and <610 nm), and red (≥610 nm), i.e., $M = 7$. Notably, the proposed MOO formulation unifies the two goals of achieving full color and high PLQY into a single objective function, providing a systematic approach to tuning synthesis parameters for the desired properties.
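The unified objective of Eqs. (1)-(2) can be sketched in a few lines of code. This is our own minimal re-implementation, assuming the color bins of Supplementary Table 2; the function and variable names are illustrative, not from the paper.

```python
# Re-implementation of the unified MOO objective (Eqs. 1-2); names are ours.
ALPHA, R = 0.5, 10.0  # PLQY threshold and priority reward from the text

# Color bins (nm) following Supplementary Table 2: [lo, hi) per color.
BINS = [("purple", 0, 420), ("blue", 420, 460), ("cyan", 460, 490),
        ("green", 490, 520), ("yellow", 520, 550),
        ("orange", 550, 610), ("red", 610, float("inf"))]

def color_of(pl_nm):
    """Map a PL wavelength in nm to one of the M = 7 color labels."""
    for name, lo, hi in BINS:
        if lo <= pl_nm < hi:
            return name

def unified_objective(samples):
    """samples: iterable of (PL wavelength in nm, PLQY) for explored conditions.
    Returns the sum over colors of max_i[(PLQY_i + R*1[PLQY_i >= ALPHA])]."""
    best = {}
    for pl, qy in samples:
        c = color_of(pl)
        score = qy + R * (qy >= ALPHA)  # reward once the threshold is met
        best[c] = max(best.get(c, 0.0), score)
    return sum(best.values())

# Tens digit = number of colors with best PLQY >= ALPHA; units = summed raw PLQYs.
print(round(unified_objective([(410, 0.52), (430, 0.30), (620, 0.70)]), 2))  # -> 21.52
```

Here two of the three colors (purple and red) clear the 0.5 threshold, so the utility reads 20 plus the summed raw PLQYs of 1.52.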
The MOO strategy is premised on the prediction results of ML models. Due to the high-dimensional search space and limited experimental data, it is challenging to build models that generalize well on unseen data, especially considering the nonlinear nature of the condition-property relationship37. To address this issue, we employed a gradient-boosting decision-tree model (XGBoost), which has proven advantageous in handling related material datasets (see Methods and Supplementary Fig. 3)30,38. In addition, its capability to guide hydrothermal synthesis has been proven in our previous work (Supplementary Fig. 4)21. Two regression models, optimized with the best hyperparameters through grid search, were fitted on the given dataset: one for PL wavelength and the other for PLQY. These models were then deployed to predict all unexplored candidate synthesis conditions. The search space for candidate conditions is defined by the Cartesian product of all possible values of the eight synthesis parameters, resulting in ~20 million possible combinations (see Supplementary Table 1). The candidate synthesis conditions, i.e., unexplored regions of the search space, are then ranked by the MOO evaluation strategy using the prediction results.
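The surrogate-model step above can be sketched as follows. This is a toy version: scikit-learn's gradient boosting stands in for XGBoost, random numbers stand in for the real dataset and the ~20-million-point candidate grid, and the ranking here uses predicted PLQY alone rather than the full MOO utility.

```python
# Sketch of the two-model surrogate loop; data and model choice are stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(23, 8))            # 23 initial conditions x 8 parameters
y_pl = rng.uniform(400, 650, size=23)    # PL wavelength labels (nm)
y_qy = rng.uniform(size=23)              # PLQY labels in [0, 1]

# One regressor per target property, as described above.
model_pl = GradientBoostingRegressor(random_state=0).fit(X, y_pl)
model_qy = GradientBoostingRegressor(random_state=0).fit(X, y_qy)

# Predict both properties for unexplored candidates, then rank them.
candidates = rng.uniform(size=(1000, 8))  # stand-in for the unexplored grid
pred_pl = model_pl.predict(candidates)
pred_qy = model_qy.predict(candidates)
top_two = np.argsort(pred_qy)[::-1][:2]   # indices of the two best candidates
```

The two top-ranked conditions would then go to the wet lab for verification, and their measured labels would be appended to `X`, `y_pl`, and `y_qy` for the next iteration.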
Finally, the PL wavelength and PLQY values of the CQDs synthesized under the top two recommended synthesis conditions are verified through experiments and characterization, and the results are then added to the training dataset for the next iteration of the MOO design loop. The iterative design loops continue until the objectives are fulfilled, i.e., when the achieved PLQY for all seven colors surpasses 50%. It's worth noting that in prior studies on CQDs, only a limited number of CQDs with short-wavelength fluorescence (e.g., blue and green) have reached PLQYs above 50%39,40,41. On the other hand, their long-wavelength counterparts, particularly those with orange and red fluorescence, usually demonstrate PLQYs under 20%42,43,44. Underlining the efficacy of our ML-powered MOO strategy, we have set an ambitious goal for all fluorescent CQDs: the attainment of PLQYs exceeding 50%. The capacity to modulate the PL emission of CQDs holds significant promise for various applications, spanning from bioimaging and sensing to optoelectronics. Our four-stage workflow is crafted to forge an ML-integrated MOO strategy that can iteratively guide the hydrothermal synthesis of CQDs towards multiple desired properties, while also constantly improving the models' prediction performance.
To assess the effectiveness of our ML-driven MOO strategy in the hydrothermal synthesis of CQDs, we employed several metrics, specifically chosen to ascertain whether our proposed approach not only meets its dual objectives but also enhances prediction accuracy throughout the iterative process. The unified objective function described above measures how well the two desired objectives have been realized experimentally, and thus serves as a quantitative indicator of the effectiveness of our proposed approach in instructing the CQD synthesis. The evaluation output of the unified objective function after a specific ML-guided synthesis loop is termed the objective utility value. The MOO strategy improves the objective utility value by a large margin of 39.27%, reaching 75.44, denoting that the maximum PLQY in all seven colors exceeds the target of 0.5 (Fig. 2a). Specifically, at iterations 7 and 19, the number of color labels with maximum PLQY exceeding 50% increases by one, resulting in an additional reward of 10 each time. Even on the seeming plateaus, the two insets illustrate that the maximally achieved PLQY is continuously enhanced. For instance, during iterations 8 to 11, the maximum PLQY for cyan emission escalates from 59% to 94%, and the maximum PLQY for purple emission rises from 52% to 71%. Impressively, our MOO approach successfully fulfilled both objectives within only 20 iterations (i.e., 40 guided experiments).
a MOO's unified objective utility versus design iterations. b Color explored with newly synthesized experimental conditions. Value ranges of colors defined by PL wavelength: purple (PL < 420 nm), blue (420 nm ≤ PL < 460 nm), cyan (460 nm ≤ PL < 490 nm), green (490 nm ≤ PL < 520 nm), yellow (520 nm ≤ PL < 550 nm), orange (550 nm ≤ PL < 610 nm), and red (PL ≥ 610 nm). It shows that while high PLQY had been achieved for red, orange, and blue in the initial dataset, the MOO strategy purposefully enhances the PLQYs for yellow, purple, cyan, and green in subsequent synthesized conditions, in groups of five. c MSE between the predicted and real target properties. d Covariance matrix for correlation among the 8 synthesis parameters (i.e., reaction temperature T, reaction time t, type of catalyst C, volume/mass of catalyst VC, type of solution S, volume of solution VS, ramp rate Rr, and mass of precursor Mp) and the 2 target properties, i.e., PLQY and PL wavelength. e Two-dimensional t-distributed stochastic neighbor embedding (t-SNE) plot for the whole search space, including unexplored (circular points), training (star-shaped points), and explored (square points) conditions, where the latter two sets are colored by real PL wavelengths.
Figure 2b reveals that the MOO strategy systematically explores the synthesis conditions for each color, addressing those that have not yet achieved the designed PLQY threshold, starting with yellow in the first 5 iterations and ending with green in the last 5. Notably, within each quintet of iterations, a single color demonstrates an enhancement in its maximum PLQY. Initially, the PLQY for yellow surges to 65%, followed by a significant rise in purple's maximum PLQY from 44% to 71% during the next set of 5 iterations. This trend continues with cyan and green, whose maximum PLQYs escalate to 94% and 83%, respectively. Taking into account both the training set (i.e., the first 23 samples) and the augmented dataset, the peak PLQY for all colors exceeds 60%. Several colors approach 70% (including purple, blue, and red), and some are near 100% (including cyan, green, and orange). This further underscores the effectiveness of our proposed ML technique. A more detailed visualization of the PL wavelength and PLQY along each iteration is provided in Supplementary Fig. 5.
The MOO strategy ranks candidate synthesis conditions based on ML predictions; thus, it is vital to evaluate the ML models' performance. Mean squared error (MSE), commonly used for regression, is employed as the evaluation metric; it is computed from the predicted PL wavelength and PLQY of the ML models versus the experimentally determined values45. As shown in Fig. 2c, the MSE of PLQY drastically decreases from 0.45 to approximately 0.15 within just four iterations, a notable error reduction of 64.5%. The MSE eventually stabilizes around 0.1 as the iterative loops progress. Meanwhile, the MSE of PL wavelength remains consistently low, always under 0.1. The MSE of PL wavelength is computed after normalizing all values to the range of zero to one for a fair comparison; thus, an MSE of 0.1 signifies a favorable deviation within 10% between the ML-predicted values and the experimental verifications. This indicates that the accuracies of our ML models for both PL wavelength and PLQY consistently improve, with predictions closely aligning with actual values after enhanced learning from augmented data. This demonstrates the efficacy of our MOO strategy not only in optimizing multiple desired properties but also in refining the ML models.
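The normalized-MSE metric described above can be sketched as below. The min-max bounds `lo` and `hi` are our own assumption standing in for the dataset's actual wavelength range; the paper does not state them.

```python
# Sketch of MSE on min-max-normalized PL wavelengths (both arrays scaled to [0, 1]).
import numpy as np

def normalized_mse(pred_nm, true_nm, lo=400.0, hi=650.0):
    """MSE after rescaling predictions and measurements to [0, 1]; lo/hi assumed."""
    p = (np.asarray(pred_nm, dtype=float) - lo) / (hi - lo)
    t = (np.asarray(true_nm, dtype=float) - lo) / (hi - lo)
    return float(np.mean((p - t) ** 2))

# A 25 nm miss over a 250 nm range is a 0.1 normalized error; squared and
# averaged with a perfect prediction, that gives an MSE of 0.005.
print(normalized_mse([500, 525], [500, 550]))  # -> 0.005
```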
To unveil the correlation between synthesis parameters and target properties, we further calculated the covariance matrix. As illustrated in Fig. 2d, the eight synthesis parameters generally exhibit low correlation with one another, indicating that each parameter contributes unique and complementary information to the optimization of the CQD synthesis conditions. In terms of the impact of these synthesis parameters on target properties, factors such as reaction time and temperature are found to influence both PL wavelength and PLQY. This underscores the importance for both experimentalists and data-driven methods of adjusting them with higher precision. Besides reaction time and temperature, PL wavelength and PLQY are determined by distinct sets of synthesis parameters with varying relations. For instance, the type of solution affects PLQY with a negative correlation, while solution volume has a stronger positive correlation with PLQY. This reiterates that, given the high-dimensional search space, the complex interplay between synthesis parameters and multiple target properties can hardly be unfolded without capable ML-integrated methods.
To visualize how the MOO strategy has navigated the expansive search space (~20 million) using only 63 data samples, we compressed the initial training, explored, and unexplored spaces into two dimensions by projecting them into a reduced embedding space using t-distributed stochastic neighbor embedding (t-SNE)46. As shown in Fig. 2e, discerning distinct clustering patterns by color proves challenging, which emphasizes the intricate task of uncovering the relationship between synthesis conditions and target properties. This complexity further underscores the critical role of an ML-driven approach in deciphering the hidden intricacies within the data. The efficacy of ML models is premised on the quality of training data. Thus, selecting training data that span as large a search space as possible is particularly advantageous to the models' generalizability37. As observed in Fig. 2e, our developed ML models benefit from the randomly and sparsely distributed training data, which in turn encourage the models to generalize to previously unseen areas of the search space and effectively guide the search for optimal synthesis conditions within this intricate multi-objective optimization landscape.
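A projection like the one in Fig. 2e can be sketched as follows, assuming scikit-learn's `TSNE`; random vectors stand in for the encoded synthesis conditions, since the real encoding and ~20-million-point space are not reproduced here.

```python
# Sketch of a 2-D t-SNE projection of encoded synthesis conditions (stand-in data).
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(1)
conditions = rng.uniform(size=(200, 8))   # stand-in for encoded 8-parameter conditions

embedding = TSNE(n_components=2, perplexity=30.0,
                 init="pca", random_state=1).fit_transform(conditions)
# embedding has shape (200, 2): one 2-D point per condition, ready for plotting
# and coloring by measured PL wavelength where labels exist.
```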
With the aid of the ML-coupled MOO strategy, we have successfully and rapidly identified the optimal conditions giving rise to full-color CQDs with high PLQY. The ML-recommended synthesis conditions that produced the highest PLQY of each color are detailed in the Methods section. Ten CQDs with the best optical performance were selected for in-depth spectral investigation. The resulting absorption spectra of the CQDs manifest strong excitonic absorption bands, and the normalized PL spectra of the CQDs display PL peaks ranging from 410 nm for purple CQDs (p-CQDs) to 645 nm for red CQDs (r-CQDs), as shown in Fig. 3a and Supplementary Fig. 6. This encompasses a diverse array of CQD types, including p-CQDs, blue CQDs (b-CQDs, 420 nm), cyan CQDs (c-CQDs, 470 nm), dark-cyan CQDs (dc-CQDs, 485 nm), green CQDs (g-CQDs, 490 nm), yellow-green CQDs (yg-CQDs, 530 nm), yellow CQDs (y-CQDs, 540 nm), orange CQDs (o-CQDs, 575 nm), orange-red CQDs (or-CQDs, 605 nm), and r-CQDs. Importantly, the PLQYs of most of these CQDs were above 60% (Supplementary Table 3), exceeding the majority of CQDs reported to date (Supplementary Table 4). Corresponding photographs of full-color fluorescence ranging from purple to red light under UV light irradiation are provided in Fig. 3b. Excellent excitation-independent behaviors of the CQDs are further revealed by the three-dimensional fluorescence spectra (Supplementary Fig. 7). Furthermore, a comprehensive investigation of the time-resolved PL spectra revealed a notable trend: the monoexponential lifetimes of CQDs progressively decreased from 8.6 ns (p-CQDs) to 2.3 ns (r-CQDs) (Supplementary Fig. 8). This observation signifies that the lifetimes of CQDs diminish as their PL wavelength shifts towards the red end of the spectrum47.
Moreover, the CQDs also demonstrate long-term photostability (>12 hours), rendering them potential candidates for applications in optoelectronic devices that require stable performance over extended periods of time (Supplementary Fig. 9). All the results together demonstrate the high quality and great potential of our synthesized CQDs.
a Normalized PL spectra of CQDs. b Photographs of CQDs under 365 nm-UV light irradiation. c Dependence of the HOMO and LUMO energy levels of CQDs.
To gain further insights into the properties of the synthesized CQDs, we calculated their bandgap energies using the experimentally obtained absorption band values (Supplementary Fig. 10 and Supplementary Table 5). The calculated bandgap energies gradually decrease from 3.02 to 1.91 eV from p-CQDs to r-CQDs. In addition, we measured the highest occupied molecular orbital (HOMO) energy levels of the CQDs using ultraviolet photoelectron spectroscopy. As shown in the energy diagram in Fig. 3c, the HOMO values exhibit wave-like variations without any discernible pattern. This result further suggests the robust predictive and optimizing capability of our ML-integrated MOO strategy, which enabled the successful screening of these high-quality CQDs from a vast and complex search space using only 40 sets of experiments.
To uncover the underlying mechanism of the tunable optical effect of the synthesized CQDs, we carried out a series of characterizations to comprehensively investigate their morphologies and structures (see Methods). X-ray diffraction (XRD) patterns with a single graphite peak at 26.5° indicate a high degree of graphitization in all CQDs (Supplementary Fig. 11)15. Raman spectra exhibit a stronger signal intensity for the ordered G band at 1585 cm⁻¹ compared to the disordered D band at 1397 cm⁻¹, further confirming the high degree of graphitization (Supplementary Fig. 12)48. Fourier-transform infrared (FT-IR) spectroscopy was then performed to detect the functional groups in the CQDs, which clearly reveals N–H and C–N stretching at 3234 and 1457 cm⁻¹, respectively, indicating the presence of abundant –NH2 groups on the surface of the CQDs, except for orange CQDs (o-CQDs) and yellow CQDs (y-CQDs) (Supplementary Fig. 13)49. The C=C aromatic ring stretching at 1510 cm⁻¹ confirms the carbon skeleton, while three oxide-related peaks, i.e., O–H, C=O, and C–O stretching, were observed at 3480, 1580, and 1240 cm⁻¹, respectively, due to the abundant hydroxyl groups of the precursor. The FT-IR spectrum also shows a stretching vibration band of –SO3 at 1025 cm⁻¹, confirming the additional functionalization of y-CQDs by –SO3H groups.
X-ray photoelectron spectroscopy (XPS) was adopted to further probe the functional groups in the CQDs (Supplementary Figs. 14 to 23). XPS survey spectra analysis reveals three main elements in the CQDs, i.e., C, O, and N, except for o-CQDs and y-CQDs. Specifically, o-CQDs and y-CQDs lack N, and y-CQDs contain S. The high-resolution C 1s spectrum of the CQDs can be deconvoluted into three peaks, including a dominant C–C/C=C graphitic carbon bond (284.8 eV), C–O/C–N (286 eV), and carboxylic C=O (288 eV), revealing the structures of the CQDs. The N 1s peak at 399.7 eV indicates the presence of N–C bonds, verifying the successful N-doping in the basal-plane network structure of the CQDs, except for o-CQDs and y-CQDs. The separated O 1s peaks at 531.5 and 533 eV indicate the two forms of oxyhydrogen functional groups, C=O and C–O, respectively, consistent with the FT-IR spectra50. The S 2p band of y-CQDs can be decomposed into two peaks at 163.5 and 167.4 eV, representing the –SO3 2p3/2 and –SO3 2p1/2 components, respectively47,51. Combining the results of the structure characterization, the excellent fluorescence properties of the CQDs are attributed to the presence of N-doping, which reduces non-radiative sites of the CQDs and promotes the formation of C=O bonds. The C=O bonds play a crucial role in radiative recombination and can increase the PLQY of the CQDs.
To gain deeper insights into the morphology and microstructures of the CQDs, we then conducted transmission electron microscopy (TEM). The TEM images demonstrate uniformly shaped and monodisperse nanodots, with average lateral sizes gradually increasing from 1.85 nm for p-CQDs to 2.3 nm for r-CQDs (Fig. 4a and Supplementary Fig. 24), which agrees with the corresponding PL wavelength, providing further evidence for the quantum size effect of CQDs47. High-resolution TEM images further reveal the highly crystalline structures of the CQDs with well-resolved lattice fringes (Fig. 4b, c). The measured crystal plane spacing of 0.21 nm corresponds to the (100) graphite plane, further corroborating the XRD data. Our analysis suggests that the synthesized CQDs possess a graphene-like high-crystallinity characteristic, thereby giving rise to their superior fluorescence performance.
a The lateral size and color of full-color fluorescent CQDs (inset: dependence of the PL wavelength on the lateral size of full-color fluorescent CQDs). Data correspond to mean ± standard deviation, n = 3. b, c High-resolution TEM images and the fast Fourier transform patterns of p-, b-, c-, g-, y-, o- and r-CQDs, respectively. d Boxplots of PL wavelength (left)/PLQY (right) and 7 synthesis parameters of CQDs. VC is excluded here as its value range depends on C, whose relationships with other parameters are not directly interpretable. The labels at the bottom indicate the minimum value (inclusive) for the respective bins; the bins on the left are the same as the discretization of colors in Supplementary Table 2, while the bins on the right are uniform. Each box spans vertically from the 25th percentile to the 75th percentile, with the horizontal line marking the median and the triangle indicating the mean value. The upper and lower whiskers extend from the ends of the box to the maximum and minimum data values.
Following the effective utilization of ML in thoroughly exploring the entire search space, we proceeded to conduct a systematic examination of 63 samples using box plots, aiming to elucidate the complex interplay between various synthesis parameters and the resultant optical properties of CQDs. As depicted in Fig. 4d, synthesis under conditions of high reaction temperature, prolonged reaction time, and low-polarity solvents tends to result in CQDs with a larger PL wavelength. These findings are consistent with the general observations in the literature, which suggest that the parameters identified above can enhance precursor molecular fusion and nucleation growth, thereby yielding CQDs with increased particle size and longer PL wavelength47,52,53,54,55. Moreover, a comprehensive survey of the existing literature implies that precursors and catalysts, typically involving electron donation and acceptance, aid in producing long-wavelength CQDs56,57. Interestingly, diverging from traditional findings, we successfully synthesized long-wavelength red CQDs under ML guidance, with 2,7-naphthalenediol containing electron-donating groups as the precursor and EDA, known for its electron-donating functionalities, as the catalyst. This significant breakthrough challenges existing assumptions and offers new insights into the design of long-wavelength CQDs.
Concerning PLQY, we found that catalysts with stronger electron-donating groups (e.g., EDA) led to enhanced PLQY in CQDs, consistent with earlier observations made by our research team16. Remarkably, we uncovered the significant impact of synthesis parameters on the CQDs' PLQY. In the high-PLQY regime, strong positive correlations were discovered between PLQY and reaction temperature, reaction time, and solvent polarity, previously unreported in the literature58,59,60,61. This insight could be applied to similar systems for PLQY improvement.
Aside from the parameters discussed above, other factors such as ramp rate, the amount of precursor, and solvent volume also influence the properties of CQDs. Overall, the emission color and PLQY of CQDs are governed by complex, non-linear trends resulting from the interaction of numerous factors. It's noteworthy that the traditional methods used to adjust CQDs' properties often result in a decrease in PLQY as the PL wavelength redshifts4,47,51,54. However, utilizing AI-assisted synthesis, we have successfully increased the PLQY of the resulting full-color CQDs to over 60%. This significant achievement highlights the unique advantages offered by ML-guided CQD synthesis and confirms the powerful potential of ML-based methods in effectively navigating the complex relationships among diverse synthesis parameters and multiple target properties within a high-dimensional search space.
Appraisals get transparency boost from AI, according to exec – National Mortgage News
Posted: at 2:48 am
Artificial intelligence is making the appraisal space, which has been marred by instances of bias, more transparent.
According to Kenon Chen, executive vice president of strategy and growth at Clear Capital, his company is using machine learning and other AI tools to attain fairer assessments of properties.
"[We're] trying to solve a national problem, but also make it accurate at a community level," he said. "These types of tools help us build a national standard for how this should be approached and in the underwriting process, ensure that we're measuring the quality of the appraisal and the quality of the condition ratings against a repeatable standard."
Apart from implementing machine learning and computer vision to analyze and compare property values, the real estate valuation technology company is also pondering how generative AI can be integrated into the appraisal and underwriting process.
"There's a lot of great new possibilities that are being explored with generative AI and we're certainly looking at that as well," Chen said.
"There's still obviously very real challenges that we as an industry need to tackle, particularly in the racial housing gap," he added. "We still have a long way to go in closing that gap for especially black and brown homeowners and it's important to ensure that the techniques that we're using for underwriting for other types of decisioning like appraisal are consistent and accurate in every community."
National Mortgage News sat down with Chen to talk about how artificial intelligence is helping to create fairer standards of valuing homes and also how technology is set to change the appraisal space going forward. This interview has been edited and condensed.
Mapping soil health: New index enhances soil organic carbon prediction – Phys.org
Posted: at 2:48 am
This article has been reviewed according to ScienceX's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility:
A cutting-edge machine learning model has been developed to predict soil organic carbon (SOC) levels, a critical factor for soil health and crop productivity. The innovative approach utilizes hyperspectral data to identify key spectral bands, offering a more precise and efficient method for assessing soil quality and supporting sustainable agricultural practices.
Soil health profoundly impacts agricultural productivity and ecological stability. Accurately assessing SOC levels is vital for enhancing crop yield and environmental sustainability. Traditional methods often fall short in precision and detail.
The new Perimeter-Area Soil Carbon Index (PASCI) addresses these gaps by utilizing hyperspectral imaging and machine learning algorithms to capture comprehensive soil characteristics. This approach not only refines SOC estimation but also supports targeted agricultural strategies and environmental monitoring, showcasing significant advancements over conventional methods.
The researchers, from Central State University, published their work in Geo-spatial Information Science on May 19, 2023. The innovative tool, PASCI, employs machine learning to analyze hyperspectral data, significantly enhancing the measurement of soil carbon. PASCI provides a novel resource for scientists and agriculturists to more effectively map and assess soil health.
PASCI distinguishes itself by simultaneously analyzing multiple spectral bands to predict soil organic carbon, a method not available in current indices. This index uses a unique mathematical model to calculate the ratio of the perimeter to the area under spectral curves, pinpointing essential spectral bands that indicate SOC levels.
This approach reveals finer details about soil composition and variations across different landscapes, significantly enhancing the accuracy of SOC predictions. The robustness of PASCI was validated through extensive regression analysis, demonstrating a strong correlation with actual SOC measurements (R² = 0.76). The index's comprehensive scope allows for better adaptation in diverse agricultural settings, potentially leading to more precise farming practices and improved crop yields.
The lead researcher says, "Our findings represent a leap forward in the remote sensing of soil organic carbon. PASCI's ability to integrate various spectral regions provides a more nuanced and accurate measure of SOC, which is vital for advancing precision agriculture and promoting sustainable land use."
PASCI's applicability is vast, offering the potential to integrate with both hyperspectral and multispectral imaging technologies. This advancement could enable large-scale detailed mapping of soil organic carbon, beneficial for agricultural planning and environmental monitoring.
The index's development aligns with the growing need for tools to assess and manage soil health, promising to enhance agricultural practices and contribute to global sustainability efforts.
More information: Eric Ariel L. Salas et al, Perimeter-Area Soil Carbon Index (PASCI): modeling and estimating soil organic carbon using relevant explicatory waveband variables in machine learning environment, Geo-spatial Information Science (2023). DOI: 10.1080/10095020.2023.2211612
Provided by Wuhan University
WorldView Launches Referral AI to Boost Home Health and Hospice Revenue – AiThority
Posted: at 2:48 am
WorldView, a leading provider of integrated healthcare technology to the top home health and hospice EHR/EMR platforms, announced the upcoming launch of Referral AI, an enhancement that automates intake referrals using a custom AI/ML model built specifically for the healthcare industry.
Referral AI uses AI/ML (Artificial Intelligence/Machine Learning) to scan and analyze dense referral document packets in seconds, flagging false positives and negatives and applying custom rules to send confirmed referrals to the EHR/EMR system.
AiThority.com Latest News: Alation Has Announced an Enhanced Integration With Snowflake Horizon
In a recent survey by WorldView, confidence in a referral being acted upon quickly was a top-ranking factor for 85 percent of referring partners. WorldView's Referral AI was designed to help agencies win more business and eliminate manual workflows related to the overload of documents in their inbox.
Home health and hospice agencies receive many forms of electronic documents in their inbox, including referrals for new patient service. Referrals must be acted on quickly, but with documents being dozens of pages, they often sit unread or, worse, are missed or overlooked. Over time, the referral can become invalid, resulting in lost revenue for the agency and posing a risk of delayed service for patients.
Referral AI is a custom AI/ML model built specifically for the home-based care industry and trained on 22+ years of data, outperforming off-the-shelf AI/ML models for similar tasks in speed and accuracy.
Referral AI benefits home health and hospice agencies through cutting-edge features:
Read: Impel adds WhatsApp messaging to AI-Powered Customer Lifecycle Management Platform
Why Referral AI matters:
"When we started developing our Referral AI technology, we saw first-hand how other solutions released features that inevitably created more downstream issues," said Jared Robey, SVP at WorldView. "We leveraged our extensive dataset to build and train our AI/ML model, ensuring that referrals are identified accurately and routed to an intake team for prioritization. This investment allows WorldView to continue pushing automation limits to enhance user experience and increase financial success."
WorldView's Referral AI prioritizes rapid patient care and reduces the burden on back-office staff. By drastically cutting down the time needed for the intake process, Referral AI enables care coordination to begin almost immediately. The solution provides an organized and insightful overview of the referral packet, ensuring clinicians have quick access to the patient's clinical history, reasons for care, and critical findings. This clarity allows admitting clinicians to focus on delivering high-quality care without sifting through extensive documentation.
Read More: L2L Introduces Powerful AI Functionality to Empower Frontline Manufacturing Teams
Excerpt from:
WorldView Launches Referral AI to Boost Home Health and Hospice Revenue - AiThority
AI Stethoscope Demonstrates ‘The Power as Well as the Risk’ of Emerging Technology – The Good Men Project
Posted: at 2:48 am
By Michael Leedom
The modest stethoscope has joined the Artificial Intelligence (AI) revolution, tapping into the power of machine learning to help health-care providers screen for diseases of the heart and lung.
This year, NCH Healthcare in Naples, Fla., became the first health-care system in the U.S. to incorporate AI into its primary care clinics to screen for heart disease. The health technology company Eko Health supplied primary care physicians with digital stethoscopes linked to a deep-learning algorithm. Following a 90-day pilot program involving more than 1,000 patients with no known heart problems, the physicians discovered that 136 had murmurs suggestive of structural heart disease.
"Leveraging this technology to uncover heart valve disease that might otherwise have gone undetected is exciting," says Bryan Murphey, President of the NCH Medical Group, which signed an annual agreement in January with Eko to use stethoscopes with the AI platform. "The numbers made sense to help our patients in a non-invasive way in the primary care setting," says Murphey.
Eko's AI tool, the SENSORA Cardiac Disease Detection Platform, enables stethoscopes to identify atrial fibrillation and heart murmurs. The platform added another algorithm, cleared by the U.S. Food and Drug Administration (FDA) in April, for the detection of heart failure using the Eko stethoscope's built-in electrocardiogram (ECG) feature.
AI-enhanced stethoscopes showed more than a twofold improvement over humans in identifying audible valvular heart disease, according to a study published in Circulation in November 2023. The AI showed a 94.1 per cent sensitivity for the detection of valve disease, outperforming the primary care physicians' 41.2 per cent. The findings were confirmed with an echocardiogram of each patient.
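Sensitivity here is the true-positive rate: of all patients with echocardiogram-confirmed valve disease, what fraction did the screener flag? A minimal sketch of the calculation (the case counts below are hypothetical, chosen only to reproduce the published percentages; the study's actual cohort sizes differ):

```python
# Sensitivity (true-positive rate) as used in screening studies.
# Counts are hypothetical illustrations, not figures from the study.

def sensitivity(true_positives, false_negatives):
    return true_positives / (true_positives + false_negatives)

confirmed_cases = 34       # hypothetical echo-confirmed valve-disease cases
ai_flagged = 32            # suppose the AI detected 32 of them
clinician_flagged = 14     # suppose auscultation alone caught 14

print(f"AI sensitivity:        {sensitivity(ai_flagged, confirmed_cases - ai_flagged):.1%}")
print(f"Clinician sensitivity: {sensitivity(clinician_flagged, confirmed_cases - clinician_flagged):.1%}")
```

Note that sensitivity alone says nothing about false alarms; a full evaluation would also report specificity on the patients without disease.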
Stethoscopes join the growing number of AI health-care applications that promise increased efficiency and improved diagnostic performance with machine learning. In recent years, the FDA has cleared hundreds of AI algorithms for use in medical practice. But as the health-care field employs AI for more services, skeptics point to risks posed by over-reliance on this black box, including the potential biases built into AI datasets and the gradual loss of clinician skills.
Since its adoption more than 200 years ago, the stethoscope has served as both a routine exam tool and a visible reminder of the doctors training. It is recognizable worldwide and, for most clinicians, has remained an analog instrument. The first electronic stethoscopes were created more than 20 years ago and feature enhancements to amplify sound and allow for digital recording.
Analog and digital stethoscopes both rely on the health-care provider's ability to hear and interpret the sounds, which may be the first indication that a patient has a new disease. However, this is not a skill every health-care practitioner masters. The faint, low-pitched whooshing of an incompetent heart valve or the subtle crackling of interstitial lung disease may go unnoticed even by the ears of experienced physicians.
Enter AI, which can mimic the human brain using neural networks consisting of algorithms that, in the case of stethoscopes, are trained with thousands of heart or lung recordings. Instead of relying on explicit program instructions, an AI system uses machine learning to train itself through advanced pattern recognition.
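The production systems described here are deep networks trained on thousands of labeled recordings, but the underlying idea can be sketched with a single logistic unit learning by gradient descent to separate two hypothetical acoustic features (everything below is a toy illustration, not Eko's method):

```python
# Toy sketch of the pattern-recognition training loop: a single logistic
# unit learns to separate synthetic "murmur" recordings from normal ones
# using two made-up features (e.g. high-frequency energy, murmur duration).
import math
import random

random.seed(0)

# Synthetic "recordings": (feature vector, label), 1 = murmur present.
data = ([([random.gauss(0.8, 0.1), random.gauss(0.7, 0.1)], 1) for _ in range(50)]
        + [([random.gauss(0.3, 0.1), random.gauss(0.2, 0.1)], 0) for _ in range(50)])

w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(200):                                   # training epochs
    for x, y in data:
        p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))  # sigmoid
        err = p - y                                    # gradient of log-loss
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

correct = sum((1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b))) > 0.5) == (y == 1)
              for x, y in data)
print(f"training accuracy: {correct / len(data):.0%}")
```

No rule like "a murmur sounds like X" is ever written down: the weights are adjusted until the model's predictions match the labels, which is the sense in which the system "trains itself" through pattern recognition.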
The effectiveness of artificial neural networks to diagnose cardiovascular disease has been demonstrated in controlled clinical trials.
AI improved the diagnosis of heart failure by analyzing ECGs performed on more than 20,000 adult patients in a randomized controlled trial published in Nature Medicine. The intervention group was more likely to be sent for a confirmatory echocardiogram, resulting in 148 new diagnoses of left ventricular systolic dysfunction.
A neural network algorithm correctly predicted 355 more patients who developed cardiovascular disease compared to traditional clinical prediction based on American College of Cardiology guidelines, according to a cohort study of nearly 25,000 incidents of cardiovascular disease.
"These machines are very good at finding patterns that are even beyond human perception. But there's both the power as well as the risk," says Paul Yi, Director of the University of Maryland Medical Intelligent Imaging Center.
The risks include limitations in performance if AI models are not properly trained. The accuracy of the AI algorithm depends on the collection of sufficient data that is representative of the population at large.
"The generalizability is a big issue," says Gaurav Choudhary, Director of Cardiovascular Research at Brown University. "These AI models require a large amount of data, and these data are not easy to come by." Choudhary notes that once an algorithm is approved by the FDA, it cannot simply be revised as new recordings become available. Changes to a particular AI algorithm require a new submission to the FDA before use.
In January 2024, the World Health Organization published new guidelines for health-care policies and practices for AI applications. Its authors warned of several risks inherent in the use of AI tools, including the existence of bias in datasets, the transparency of the algorithms employed and the erosion of medical-provider skills.
AI algorithms that interpret heart and lung recordings may not have been trained on the full spectrum of possible sounds if the data does not include a wide range of patients and ambient noises.
"This technology has to be validated across a variety of murmurs in a variety of clinical environments and situations," says Andrew Choi, Professor of Medicine and Radiology at George Washington University. "Many of our patients are not the ideal patients," he adds, noting that initial validation typically involves patients with clear heart sounds. In real-world practice, there will be older patients, obese patients and noisy emergency departments that may compromise the precision of the AI model.
Another complication is the inscrutable nature of the algorithm. Without a clear understanding of how these systems make decisions, it may be difficult for health-care providers to discuss a management plan with patients, particularly if the AI output appears incompatible with other clinical information during the examination.
"Explainability is sort of a holy grail," says Paul Friedman, Chair of the Department of Cardiovascular Medicine at Mayo Clinic and one of the developers of the AI tech that Eko Health uses. Over time, he says, more studies may elucidate how these systems process information. AI uncertainty is similar to our incomplete understanding of how certain medications actually work, he suggests. Both are used because they are consistently effective.
"I'm not dismissive of the importance of trying to crack the black box, but I think that's a subject for research," he says.
The introduction of AI in the exam room could enhance diagnostic performance while disrupting the relationship between health-care provider and patient. The provider may become complacent and gradually dependent on AI for answers to clinical questions, while the patient may feel that the care is becoming depersonalized and lose confidence in the doctor.
The subconscious transfer of decision-making to an automated system is called automation bias, one of many cognitive biases the health-care provider must confront. There are many reasons providers may forgo medical training and uncritically accept the heuristics of AI, including inexperience, complex workloads and time constraints, according to a systematic review of the phenomenon.
It is still unclear how AI will ultimately influence the physician-patient interaction, says Yi. "I think that's kind of the last mile of AI in medicine. It's this human-computer interaction piece where we know that this AI works well in the lab, but how does it work when it interacts with humans? Does it make them second guess what they're doing? Or does it give them false confidence?"
The number of AI-enhanced devices submitted to the FDA has soared since 2015, with almost 700 AI medical algorithms cleared for market. Most applications are for radiology. AI is already being integrated into academic medical centres across North America for a variety of tasks, including diagnosing disease, projecting length of hospitalization, monitoring wearable devices and performing robotic surgery.
At Unity Health in Toronto, more than 50 AI-based innovations have been developed to improve patient care since 2017. One of these is a tool used at St. Michael's Hospital since 2020 called CHARTWatch, which sifts electronic health records, including recent test results and vital signs, to predict which patients are at risk of clinical deterioration. The algorithm proved to be lifesaving during the COVID pandemic, leading to a 26 per cent drop in unanticipated mortality.
"I think AI is really going to transform health care," says Omer Awan, Professor of Radiology at the University of Maryland School of Medicine. He is not concerned that AI will take over physician jobs, instead predicting that AI will continue to improve efficiency and help reduce physician burnout.
Research continues on how best to incorporate AI into the primary care setting, including ethical issues such as data privacy, legal liability and informed consent. The adoption of AI may infringe on patient autonomy if medical decisions are made using algorithms without regard for patient preferences, according to a literature review.
Murphey says he is eager to see Eko Health's AI-paired stethoscopes improve the screening for early heart disease but remains cautious about too much use of technology.
"I want to stay connected to the patient. I take pride in my patient examinations," he says. "I think that's one of the important things we provide to patients in the primary care setting, and I'm not looking to sever that part of the relationship."
This post was previously published on HEALTHYDEBATE.CA and is republished under a Creative Commons license.
***
Photo credit: iStock.com
Original post:
Machine learning and AI enable early lameness detection – Farmers Guardian
Posted: at 2:48 am
You are currently accessing Farmers Guardian via your Enterprise account.
If you already have an account please use the link below to sign in.
If you have any problems with your access or would like to request an individual access account please contact our customer service team.
Phone: +44 (0) 1858 438800
Email: [emailprotected]
Go here to read the rest:
Machine learning and AI enable early lameness detection - Farmers Guardian
Meta will use your social media posts to train its AI. Europe gets an opt out – The Register
Posted: at 2:48 am
Meta will start training its AI models using everyone's social media posts though European Union users can opt out, a luxury the rest of the world won't enjoy.
The move, which the Facebook parent detailed in an announcement today, is ostensibly to bring its machine-learning systems to Europe.
Meta has so far not included its European userbase in its AI training data, presumably to avoid legal conflict with the continent's privacy regulations. Now it's pushing ahead with that despite complaints.
"To properly serve our European communities, the models that power AI at Meta need to be trained on relevant information that reflects the diverse languages, geography and cultural references of the people in Europe who will use them," the social media titan said.
"To do this, we want to train our large language models that power AI features using the content that people in the EU have chosen to share publicly on Meta's products and services."
As training AI from user data is doubtlessly going to be contentious in Europe, Meta has attempted to cover itself in two ways. Firstly, when it says "public content," Meta means posts, comments, photos, and other content posted on its social media platforms by users over the age of 18. Private messages are, apparently, strictly verboten from the training data.
Meta also says it has sent billions of notifications to European users since May 22 to give them a chance to decline before the AI training rules kick in worldwide on June 26. The Instagram goliath says any user can decline, no questions asked, and that their posts won't be used to train AI models now or ever.
This is substantially different from the rest of the world, where opting out just isn't a choice. Granted, it's already too late to opt out for training data used for Meta's LLaMa 3, but even training for future models is mandatory for Facebook and Instagram users outside of the EU. Perhaps users outside of Europe will be able to choose to opt out in the future, but for now it's a feature exclusive to the EU.
Although Meta likely feels that it's in a good position to start using European user data, it's hard to imagine there being no pushback at all. Before the social media giant even made its public announcement, it signaled its intentions via an update to its privacy policy last week. That prompted consumer privacy advocacy group noyb to file complaints across Europe.
Noyb claims the collection of user data needs to be opt-in, not opt-out, by default. The fact that data can't really be scrubbed from an LLM or other AI model is also likely to cause problems due to the European Union's Right to be Forgotten.
Plus, Meta and the EU are not on the best of terms. Just this year the EU launched probes into Meta including one concerning child safety and another about misinformation surrounding the now-concluded EU parliamentary elections. While it's not clear if Meta will have its way in the end, it's hard to imagine there not being a challenge against the social network at some point or another.
See the article here:
Meta will use your social media posts to train its AI. Europe gets an opt out - The Register
The war for AI talent is heating up – The Economist
Posted: at 2:47 am
Pity OpenAI's HR department. Since the start of the year the maker of ChatGPT, the hit artificial-intelligence (AI) chatbot, has lost about a dozen top researchers. The biggest name was Ilya Sutskever, a co-founder responsible for many of the startup's big breakthroughs, who announced his resignation on May 14th. He did not give a reason, though many suspect that it is linked to his attempt to oust Sam Altman (pictured), the firm's boss, last December. Whatever the motivation, the exodus is not unusual at OpenAI. According to one estimate, of the 100-odd AI experts the firm has hired since 2016, about half have left.
That reflects not Mr Altman's leadership but a broader trend in the technology industry, one that OpenAI itself precipitated. Since the launch of ChatGPT in November 2022, the market for AI labour has been transformed. Zeki Research, a market-intelligence firm, reckons that around 20,000 companies in the West are hiring AI experts. Rapid advances in machine learning and the potential for a "platform shift" (tech-speak for the creation of an all-new layer of technology) have changed the types of skills employers are demanding and the places where those who possess them are going. The result is a market where AI talent, previously hoarded at tech giants, is becoming more distributed.
Original post:
IT Pros Love, Fear, and Revere AI: The 2024 State of AI Report – InformationWeek
Posted: at 2:47 am
Is AI a boon or a bane to job security? A security tool or a vulnerability? Mature enterprise technology or immature toy? Essential enterprise technology or threat to humanity?
According to survey respondents from InformationWeek's latest State of AI Report, it's all of the above.
More than a year after generative artificial intelligence became widely available to the public, we polled 292 people directly involved with AI at their organizations.
Unsurprisingly, the results reveal that adoption of AI is widespread and that businesses are using the technology for a wide range of tasks, with 85% of respondents describing their organization's approach to AI as "pioneering" or "curious but cautious."
But expectations about this novel technology are also quite different from reality. So far, AI hasn't significantly affected headcount, and respondents overwhelmingly feel their own jobs are safe from its reach.
On the other hand, concerns around data security, hallucinations, and the reliability of outcomes are weighing on respondents' minds. Some 53% say that, if left unchecked, artificial intelligence poses a threat to humanity.
Download this free report to learn how IT departments are investing in AI now and what's guiding their plans for the future.
More here:
IT Pros Love, Fear, and Revere AI: The 2024 State of AI Report - InformationWeek
Grant Cardone Says ‘Anyone Under 30 Years Old Should Not Even Consider Buying A Home At This Time’ – Benzinga
Posted: at 2:46 am
June 5, 2024 12:42 PM | 1 min read
Real estate titan Grant Cardone has said time and time again that Americans should rent instead of buying homes, and he provided a breakdown earlier today in a post on X, formerly Twitter, focused on those 30 years of age and under.
Here's the post:
Cardone asserts that individuals under 30 should avoid purchasing homes. He highlighted the financial burdens of homeownership, citing an average home price of $436,000 with annual expenses of around $50,000. These costs include interest, property taxes, HOA fees, PMI, and maintenance, amounting to approximately $4,200 per month.
Cardone contrasts this with renting, which he notes can cost under $2,000 monthly. He emphasized the benefits of renting, including no long-term commitments, no down payments, and greater mobility. Cardone concludes that buying a house is no longer the "American dream" for younger generations.
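A quick back-of-envelope check shows how the monthly figure follows from the annual one; all numbers below come from Cardone's post, not from market data:

```python
# Back-of-envelope check of the figures cited in the post: $50,000/year
# in ownership costs is roughly the $4,200/month quoted, versus renting
# at under $2,000/month. Figures are from the post, not market data.

annual_ownership_cost = 50_000      # interest, taxes, HOA, PMI, maintenance
monthly_ownership = annual_ownership_cost / 12
monthly_rent = 2_000                # the post's quoted ceiling for renting

print(f"owning:  ${monthly_ownership:,.0f}/month")   # ≈ $4,167, i.e. ~$4,200
print(f"renting: ${monthly_rent:,}/month")
print(f"annual difference: ${(monthly_ownership - monthly_rent) * 12:,.0f}")
```

By these figures, renting leaves roughly $26,000 a year on the table compared with owning, which is the core of the argument; the comparison ignores equity buildup and appreciation, which the post does not quantify.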
This most recent post echoes what he said in an April 12 post where he gave seven reasons why Americans should rent instead of buying a home.
Keep Reading:
Image Credit: YouTube
Read the original here: