<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Android Authority]]></title><description><![CDATA[Android Authority]]></description><link>https://androidauthority.dev</link><generator>RSS for Node</generator><lastBuildDate>Sat, 07 Mar 2026 15:07:44 GMT</lastBuildDate><atom:link href="https://androidauthority.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><atom:link rel="first" href="https://androidauthority.dev/rss.xml"/><atom:link rel="next" href="https://androidauthority.dev/rss.xml?after=Njc4MDI3YTQyNzI2ZGMzYjgwNWJmOTA4XzIwMjUtMDEtMDlUMTk6NDY6NDQuODAzWg=="/><item><title><![CDATA[Preparing for AI interviews]]></title><description><![CDATA[<p>We have a video channel recommendation if you want to prepare for AI interviews. This is a rapidly evolving area, and there is a great deal of activity going on in this space.</p>
<p><a target="_blank" href="https://www.youtube.com/@PrincipleAI">Principle AI</a> is a channel that focuses on AI mentorship. It has an excellent collection of Shorts and videos about interview preparation, AI concepts, and more. The Shorts cover AI concepts in under 40 seconds, while the longer five-minute videos go deeper; there are also videos answering more specific system design questions.</p>
<div class="embed-wrapper"><a class="embed-card" href="https://www.youtube.com/watch?v=EkJTiGqSqMk&amp;t=13s">https://www.youtube.com/watch?v=EkJTiGqSqMk&amp;t=13s</a></div>
]]></description><link>https://androidauthority.dev/preparing-for-ai-interviews</link><guid isPermaLink="true">https://androidauthority.dev/preparing-for-ai-interviews</guid><dc:creator><![CDATA[Wiseland AI Engineering Team]]></dc:creator></item><item><title><![CDATA[System Design: Architectural Patterns for Agentic AI-based Deployments]]></title><description><![CDATA[<p>How do we deploy agents in a large organization to deal with scale? Agentic AI involves autonomous systems acting on their own. They are ultimately powered by models, and these models need to be updated as well. In addition to doing their work, they also need to ensure they adhere to AI safety rules.</p>
<p>Agents can be seen as microservices as well, but they are much less formal in their behaviour than well-defined microservices. Also, microservices are mostly synchronous, while agentic AI is asynchronous.</p>
<p><img src="https://documents.lucid.app/documents/0572c920-9b78-4192-839f-ab2734f85da0/pages/0_0?a=5068&amp;x=-934&amp;y=1307&amp;w=1167&amp;h=287&amp;store=1&amp;accept=image%2F*&amp;auth=LCA%20960f1e9f279126e62ed92287e183fe16336c4fa98db299753f7541bee7388694-ts%3D1749402235" alt /></p>
<p>Agent 1 here spends its own time executing the commands and achieving the objective, and eventually returns an output. An agent is actually a model that creates a plan and hands it over to another system to execute. Agents themselves don't have a notion of state.</p>
<h2 id="heading-pra-loop">PRA Loop</h2>
<p>The Perceive - Reason - Act loop is how most agents work today. The Perceive step involves the agent using all its sensors to gather as much information as it can about the current state. Based on that current state, it creates a reasoning chain of what needs to be done and a rough plan of goals and sub-goals. Once that is done, it determines the next action and performs it. However, the results of this action might change the state of the world, which the agent needs to factor in. So it executes the PRA steps again. This goes on until the agent has determined that the goal has been achieved and it is time to stop.</p>
<p>Let us take an example:</p>
<p>A customer opens their banking app and decides to chat with the AI chatbot. The chatbot knows the customerId. It can fetch all the accounts and recent communication sent to the customer. The chatbot knows in advance that the customer's account was recently blocked due to suspected fraud. This is the Perceive stage. In the Reason stage, the bot might conclude that the customer is contacting it to unblock the account. In the Act stage, the chatbot might ask the customer "Are you here to unblock your account ending in 0000?"</p>
<p>If the customer says yes, the bot then updates its internal understanding of the state (Perceive) and reasons about what next step it should take (Reason). The next step could be to verify certain transactions on the account to decide whether they were indeed fraudulent.</p>
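<p>The banking example above can be sketched as a minimal Perceive-Reason-Act loop. Everything in this sketch is a hypothetical illustration: the state dictionary, the rule-based reason() stub (a real agent would call an LLM here), and the messages are all made up.</p>

```python
# Minimal sketch of a Perceive-Reason-Act loop for the banking chatbot.
# All functions, messages, and the state shape are hypothetical.

def perceive(state, new_input):
    """Fold the latest observation (user reply, account data) into state."""
    if new_input is not None:
        state["history"].append(new_input)
    return state

def reason(state):
    """Decide the next action from the current state. A real agent would
    call an LLM here; this stub uses simple rules for illustration."""
    if not state["history"]:
        return ("ask", "Are you here to unblock your account ending in 0000?")
    if state["history"][-1] == "yes":
        return ("verify", "Please confirm these recent transactions.")
    return ("stop", None)

def act(action):
    """Execute the chosen action; in a real system this would send a
    message to the user or call an API."""
    kind, message = action
    return message

def pra_loop(inputs):
    state = {"history": []}
    outputs = []
    # Each turn: perceive the new input, reason about it, act,
    # then loop again with the changed state of the world.
    for user_input in [None] + inputs:
        state = perceive(state, user_input)
        action = reason(state)
        if action[0] == "stop":
            break
        outputs.append(act(action))
    return outputs
```

<p>The key point is the loop shape: every action's result is folded back into the state before the next reasoning step, until the agent decides the goal is achieved.</p>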
<p>For an agent to work correctly, it needs external information too, such as the user's recent transactions. This is achieved using another architecture.</p>
<h2 id="heading-augmented-language-model">Augmented Language Model</h2>
<p>ALM, or Augmented Language Model, is a model that can call external tools. The model knows that it needs to call an external REST API to get the user's last 5 transactions. To achieve this, a Tool Manager works closely with the LLM to generate proper prompts.</p>
<p>For example, the LLM might generate placeholder tokens that the Tool Manager replaces with the tool's returned response.</p>
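<p>A rough sketch of this idea in Python (the {{TOOL:...}} token format and the tool registry below are invented for illustration; real agent frameworks each define their own conventions):</p>

```python
import re

# Hypothetical tool registry; in practice each entry would make an
# authenticated REST call rather than return a canned string.
TOOLS = {
    "last_transactions": lambda user_id: "coffee $4, rent $1200, transfer $300",
}

def resolve_tools(llm_output, user_id):
    """Replace {{TOOL:name}} placeholders emitted by the model with the
    result of actually invoking that tool."""
    def replace(match):
        tool_name = match.group(1)
        return TOOLS[tool_name](user_id)
    return re.sub(r"\{\{TOOL:(\w+)\}\}", replace, llm_output)

# The model emits a draft answer containing a placeholder...
draft = "Your recent activity: {{TOOL:last_transactions}}. Do these look familiar?"
# ...and the Tool Manager fills it in.
final = resolve_tools(draft, user_id="42")
```

<p>In a real deployment, the substituted text would typically be fed back to the model for a final pass rather than sent to the user verbatim.</p>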
<h2 id="heading-conclusion">Conclusion</h2>
<p>The PRA loop forms the basis of agentic AI. In a future post we will discuss how an agent fits into a larger software system.</p>
]]></description><link>https://androidauthority.dev/system-design-architectural-patterns-for-agentic-ai-based-deployments</link><guid isPermaLink="true">https://androidauthority.dev/system-design-architectural-patterns-for-agentic-ai-based-deployments</guid><category><![CDATA[agentic AI]]></category><dc:creator><![CDATA[Wiseland AI Engineering Team]]></dc:creator></item><item><title><![CDATA[Why programming is going to get a lot tougher for humans]]></title><description><![CDATA[<p>AI coding assistants have been a great boon for coders and non-coders alike. I now see children creating simple programs without ever learning to code, and experienced coders using these assistants in their day-to-day work to be more productive.</p>
<p>Without a doubt, AI coders are getting both popular and better. Google's Jeff Dean claimed that within a year they will be as good as a junior engineer.</p>
<p>To understand the impact of AI coders on the industry and on the art and science of software engineering, I always give the example of AutoCAD. Once upon a time, people used to draw engineering diagrams on paper. These were well-paid jobs and considered highly skilled. As AutoCAD-like software became widespread, it did not destroy the draftsman's job but rather created far more people who could design using AutoCAD. Salaries went down, but we saw a boom of better-designed, cheaper buildings and products. But something else happened too. It is now practically impossible to design something in AutoCAD and then modify its pieces through hand drawings. It would be incredibly hard to design something as a hybrid process between computer design and manual design. This is because a lot of human design constraints, such as the inability to zoom in and out on a design, simply do not exist in software.</p>
<p>We are about to see the same revolution in computer programming. A lot of computer programming best practices are constrained not by real-world constraints but by things like human cognitive load.</p>
<p>For example, micro-services architecture isn't really great in itself, but it is very good for large organizations, because small teams can focus on their own work and spend less time in design alignment meetings.</p>
<p>AI coders won't have such constraints. They will have different constraints. At some point, AI coders' compute time will become a major cost center, and companies will then insist that the code-writing process be AI-first so the code base is more friendly for AI coders.</p>
<p>I expect this might mean more monolithic code and more verbose code that provides more context to AI agents.</p>
<p>Soon we are going to find that AI agents can rewrite your entire codebase with a high level of accuracy in a matter of minutes. Want to migrate from AWS to GCP? It could be a one-hour job for your 20M lines of code.</p>
<h1 id="heading-few-trends-i-predict"><strong>A few trends I predict:</strong></h1>
<h2 id="heading-twitter-bootstrap-like-boilerplate-for-large-projects">Twitter Bootstrap-like boilerplate for large projects.</h2>
<p>I think we will soon move to a more declarative way of defining the entire software process. A bit like Docker, but for your entire software process. Many large projects' code bases will be written 100% by specific AI agents, and you will be encouraged to use those agents if you are starting a new project.</p>
<p>This declarative definition would then be maintained in a source repo instead of maintaining a codebase. The codebase itself might end up becoming entirely opaque to human programmers.</p>
<h2 id="heading-meta-programming">Meta Programming</h2>
<p>Once AI does most of the coding, humans will not be expected to write or modify that code at all. Instead, humans will add to that code using a meta language or meta programming. In short, humans will write plugins for the main software to modify its behaviour in ways AI can't.</p>
<h2 id="heading-formal-verification">Formal verification</h2>
<p>Today we write tests and insist on a certain level of coverage. Then we have integration tests and so on. AI will be able to verify code behaviour without actually running the code. It will be able to write tests in a totally different way than we are used to, giving us very high confidence in its behaviour.</p>
<h2 id="heading-directly-writing-binaries">Directly writing binaries</h2>
<p>Why should AI even bother writing a Python script when it can directly create the binary? Why should AI write a .java file when it can directly produce the .class file? We are still far from this, but I expect we will be there in the next 5 years.</p>
<p>Programming languages exist because humans have cognitive trouble reading and following binary code. Machines can write such code directly. AI models trained on binary files might train better and produce better binaries.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>All this means writing software will have AI as its main character and we will be side characters. Tooling, and sometimes even programming language syntax, will evolve to make it easier for AI to write code, and humans will end up becoming its assistants. In the next two years we will have code bases that will not make sense to humans at all.</p>
<p>We will stare at the code with the same level of confusion as we do when we accidentally open an EXE file in a text editor.</p>
]]></description><link>https://androidauthority.dev/why-programming-is-going-get-lot-tougher-for-humans</link><guid isPermaLink="true">https://androidauthority.dev/why-programming-is-going-get-lot-tougher-for-humans</guid><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[vibe coding]]></category><dc:creator><![CDATA[Wiseland AI Engineering Team]]></dc:creator></item><item><title><![CDATA[Vibe Coding Apps have never been easier]]></title><description><![CDATA[<p>Creating mobile apps is about to get a lot easier. Google recently announced <a target="_blank" href="https://jules.google/">Jules</a>, their asynchronous coding agent. I have been playing with it, and despite a limit of 5 tasks per day, I think it is pretty powerful. It was able to improve some of my Docker configuration with a few prompts and create a pull request, and I could make things a lot better in a matter of minutes.</p>
<p>Arguably, this is not yet on par with a junior engineer. But I expect that it will get a lot better with time. In fact, it could get better exponentially.</p>
<h2 id="heading-templates-are-the-way">Templates are the way</h2>
<p>It is hard to modify code written by humans. But it might be extremely easy to modify code written by AI. I expect that in the coming days we will see a lot of boilerplate apps that are 100% generated using AI. Everyone else will build on top of those. This will be a bit like a Twitter Bootstrap moment for us. Twitter Bootstrap made the whole web cleaner and more beautiful.</p>
<h2 id="heading-cross-platform-code-made-easy">Cross-Platform code made easy</h2>
<p>The strategy for cross-platform development has been to use frameworks like React Native or Flutter. I think this will change in favor of more standard per-platform development, where you rely on AI to make code changes for feature parity. This might actually make the code much better.</p>
<p>An AI agent could make the same feature change to both your Swift codebase and Android codebase at the same time.</p>
<h2 id="heading-simplified-api-contracts">Simplified API contracts</h2>
<p>API contracts have been one of the perennial coordination problems in software engineering. The backend and frontend teams work independently, and there is communication overhead in agreeing on an API contract.</p>
<p>With AI being able to fully understand both client and backend code, it can design much better API contracts and also change them much more easily. A lot of problems that exist today because of the limits of human cognition are going to disappear.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Human coding is about to get a lot more complicated as a result of vibe coding. As AI writes more and more code, it will soon write code more complex than humans can understand. This means writing code by hand is going to get a lot tougher.</p>
]]></description><link>https://androidauthority.dev/vibe-coding-apps-have-never-been-easier</link><guid isPermaLink="true">https://androidauthority.dev/vibe-coding-apps-have-never-been-easier</guid><category><![CDATA[vibe coding]]></category><dc:creator><![CDATA[Wiseland AI Engineering Team]]></dc:creator></item><item><title><![CDATA[Major painpoints of Android development as a side activity - Part 1: Gradle]]></title><description><![CDATA[<p>I have been doing Android development as a hobby for many years now, and I have applications with millions of installs. However, Android development has been painful for me. This may not be a big deal for large orgs, but as a small developer who writes code only on weekends, it is a different experience.</p>
<h2 id="heading-gradle-a-bit-of-frankensteins-monster">Gradle - a bit of Frankenstein's monster.</h2>
<p>I come from a web background, so when I think of UI, I mostly think of HTML/React/Angular etc. The idea in web UI is very simple: HTML renders the UI, and you fetch data from a remote server which does the complicated work. I am also familiar with the Java Swing API that was used in the past to design UIs.</p>
<p>Unfortunately, that mental model did not translate well to Android development for me. It was made even more complicated by extremely poor documentation and sometimes outright wrong information from Google.</p>
<p>Android Java development was broken to begin with. Firstly, the Android operating system had its own problems, as it had many versions and the Android SDK changed quite a bit from one version to another. But that is a problem that would need another post.</p>
<p>Android Java development's problems started with its completely weird programming model that scattered things into 4 different types of files.</p>
<h3 id="heading-gradle-and-build-system">Gradle and Build System</h3>
<p>The Gradle build system was an improvement over anything else we had seen in the past, except maybe Maven. But it requires you to understand Groovy, a language that apparently changes its syntax every second Sunday for no rhyme or reason. I never had time to sit down and study Groovy syntax. But it would not have helped, as the language changed frequently.</p>
<p>But there is more complexity here. Gradle files are not exactly Groovy either. They are a Groovy-based DSL (Domain Specific Language). Frankly, I have no idea what that really means. Sure, I could spend some time mastering it, but clearly that would not have helped, as Android has now moved to a Kotlin-based DSL.</p>
<p>A build system is supposed to simplify the build process so that the developer can focus on writing code. A build file is supposed to declare everything that the primary code needs, and as long as you have the basic underlying system, everything just builds magically and works.</p>
<p>Gradle is the exact opposite of this. When you read a Gradle file, you will have no clue what that file is about and what it is supposed to do. An Android project has three Gradle files of consequence.</p>
<p><strong>settings.gradle</strong></p>
<p>This file is supposed to be the project-level file. You won't understand much about this file, which is fine as long as it works and you rarely have to touch it. But you would be wrong. For some projects the order of repositories matters; for some it does not.</p>
<p>Notice how the same google() and mavenCentral() entries are repeated without a clear indication of what that means. The pluginManagement block tells the build system where it should find the plugins. What are plugins? That is something totally opaque to an Android application developer. You never have to learn anything about them unless and until the build system breaks.</p>
<p>This is followed by dependencyResolutionManagement. This tells your build system where to find the application dependencies and in what order.</p>
<p>Then the file describes the project name and what modules are under that project.</p>
<pre><code class="lang-kotlin">pluginManagement {
    repositories {
        google()
        mavenCentral()
        gradlePluginPortal()
    }
}
dependencyResolutionManagement {
    repositoriesMode.<span class="hljs-keyword">set</span>(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
    repositories {
        google()
        mavenCentral()
    }
}
rootProject.name = <span class="hljs-string">'MyAndroidApp'</span> <span class="hljs-comment">// Name of your project</span>
include <span class="hljs-string">':app'</span>             <span class="hljs-comment">// Include the 'app' module</span>
</code></pre>
<p><strong>Project level build.gradle</strong></p>
<p>But the settings file is totally different from the project-level build file. Note that here, too, you specify the repositories that are also present in the settings file, so this stuff is now repeated in multiple places. Next time someone on Stack Overflow tells you that you need to change something here to fix your build, you will have no clue where to make those changes. Also, does the order of repositories matter in this file? Nobody knows.</p>
<p>Notice that there is a dependencies block. But these dependencies have nothing to do with your actual Android application; they are dependencies of the Gradle build system itself.</p>
<p>Oh wait, it does not stop there. For some reason we need the same repositories block yet again, with google() and mavenCentral() mentioned there. How awesome. Why does it exist? Nobody knows. Does the order matter here? Nobody knows.</p>
<pre><code class="lang-kotlin"><span class="hljs-comment">// Top-level build file where you can add configuration options common to all sub-projects/modules.</span>

buildscript {
    repositories {
        google()
        mavenCentral()
    }
    dependencies {
        classpath <span class="hljs-string">'com.android.tools.build:gradle:7.4.2'</span> <span class="hljs-comment">// Specify the Android Gradle Plugin version</span>
        <span class="hljs-comment">// <span class="hljs-doctag">NOTE:</span> Do not place your application dependencies here; they belong</span>
        <span class="hljs-comment">// in the individual module build.gradle files</span>
    }
}

allprojects {
    repositories {
        google()
        mavenCentral()
    }
}
</code></pre>
<p>As an Android developer, the expectation is that you will rarely touch these two files. Yet you will often find that things break and you have to touch these files far too often.</p>
<p>The real build file, however, is the build.gradle inside the module.</p>
<pre><code class="lang-kotlin">plugins {
    id <span class="hljs-string">'com.android.application'</span> <span class="hljs-comment">// Apply the Android Application plugin</span>
    id <span class="hljs-string">'org.jetbrains.kotlin.android'</span> <span class="hljs-comment">// Apply the Kotlin Android plugin</span>
    id <span class="hljs-string">'kotlin-kapt'</span>  <span class="hljs-comment">// Enable Kotlin Annotation Processing Tool (KAPT)</span>
}

android {
    namespace <span class="hljs-string">'com.example.myapp'</span> <span class="hljs-comment">// Package name of your app</span>
    compileSdk <span class="hljs-number">34</span>

    defaultConfig {
        applicationId <span class="hljs-string">'com.example.myapp'</span> <span class="hljs-comment">// Unique application ID</span>
        minSdk <span class="hljs-number">24</span>
        targetSdk <span class="hljs-number">34</span>
        versionCode <span class="hljs-number">1</span>
        versionName <span class="hljs-string">"1.0"</span>

        testInstrumentationRunner <span class="hljs-string">"androidx.test.runner.AndroidJUnitRunner"</span>
        vectorDrawables {
            useSupportLibrary <span class="hljs-literal">true</span>
        }
    }

    buildTypes {
        release {
            minifyEnabled <span class="hljs-literal">true</span>  <span class="hljs-comment">// Enable code shrinking for release builds</span>
            proguardFiles getDefaultProguardFile(<span class="hljs-string">'proguard-android-optimize.txt'</span>), <span class="hljs-string">'proguard-rules.pro'</span> <span class="hljs-comment">// ProGuard rules</span>
        }
        debug {
            debuggable <span class="hljs-literal">true</span>
        }
    }
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
    kotlinOptions {
        jvmTarget = <span class="hljs-string">'1.8'</span>
    }
    buildFeatures {
        compose <span class="hljs-literal">true</span> <span class="hljs-comment">// Enable Jetpack Compose</span>
        viewBinding <span class="hljs-literal">true</span>
        dataBinding <span class="hljs-literal">true</span>
    }
    composeOptions {
        kotlinCompilerExtensionVersion <span class="hljs-string">'1.5.10'</span> <span class="hljs-comment">// Or the latest version</span>
    }
    packagingOptions {
        resources {
            exclude <span class="hljs-string">'META-INF/LICENSE*'</span> <span class="hljs-comment">// Exclude license files to reduce APK size</span>
        }
    }
}

dependencies {
    def nav_version = <span class="hljs-string">"2.7.5"</span>
    def room_version = <span class="hljs-string">"2.6.1"</span>
    def lifecycle_version = <span class="hljs-string">"2.7.0"</span>

    <span class="hljs-comment">// Core KTX</span>
    implementation <span class="hljs-string">"androidx.core:core-ktx:1.12.0"</span>
    implementation <span class="hljs-string">"androidx.lifecycle:lifecycle-runtime-ktx:<span class="hljs-variable">$lifecycle_version</span>"</span>

    <span class="hljs-comment">// UI</span>
    implementation <span class="hljs-string">"androidx.appcompat:appcompat:1.6.1"</span>
    implementation <span class="hljs-string">"com.google.android.material:material:1.11.0"</span>
    implementation <span class="hljs-string">"androidx.constraintlayout:constraintlayout:2.1.4"</span>

    <span class="hljs-comment">// Jetpack Compose</span>
    implementation <span class="hljs-string">"androidx.activity:activity-compose:1.8.2"</span>
    implementation platform(<span class="hljs-string">"androidx.compose:compose-bom:2023.10.01"</span>)
    implementation <span class="hljs-string">"androidx.compose.ui:ui"</span>
    implementation <span class="hljs-string">"androidx.compose.ui:ui-graphics"</span>
    implementation <span class="hljs-string">"androidx.compose.ui:ui-tooling-preview"</span>
    implementation <span class="hljs-string">"androidx.compose.material3:material3"</span>
    debugImplementation <span class="hljs-string">"androidx.compose.ui:ui-tooling"</span>
    debugImplementation <span class="hljs-string">"androidx.compose.ui:ui-test-manifest"</span>

    <span class="hljs-comment">// Navigation</span>
    implementation <span class="hljs-string">"androidx.navigation:navigation-compose:<span class="hljs-variable">$nav_version</span>"</span>
    implementation <span class="hljs-string">"androidx.navigation:navigation-fragment-ktx:<span class="hljs-variable">$nav_version</span>"</span>
    implementation <span class="hljs-string">"androidx.navigation:navigation-ui-ktx:<span class="hljs-variable">$nav_version</span>"</span>

    <span class="hljs-comment">// Room</span>
    implementation <span class="hljs-string">"androidx.room:room-ktx:<span class="hljs-variable">$room_version</span>"</span>
    kapt <span class="hljs-string">"androidx.room:room-compiler:<span class="hljs-variable">$room_version</span>"</span>

    <span class="hljs-comment">// Networking (Retrofit)</span>
    implementation <span class="hljs-string">"com.squareup.retrofit2:retrofit:2.9.0"</span>
    implementation <span class="hljs-string">"com.squareup.retrofit2:converter-gson:2.9.0"</span> <span class="hljs-comment">// Gson converter</span>

    <span class="hljs-comment">// Dependency Injection (Dagger/Hilt) - Hilt</span>
    implementation <span class="hljs-string">"com.google.dagger:hilt-android:2.51"</span>
    kapt <span class="hljs-string">"com.google.dagger:hilt-compiler:2.51"</span>
    implementation <span class="hljs-string">"androidx.hilt:hilt-navigation-compose:1.1.0"</span>

    <span class="hljs-comment">// Testing</span>
    testImplementation <span class="hljs-string">"junit:junit:4.13.2"</span>
    androidTestImplementation <span class="hljs-string">"androidx.test.ext:junit:1.1.5"</span>
    androidTestImplementation <span class="hljs-string">"androidx.test.espresso:espresso-core:3.5.1"</span>
}
</code></pre>
<p>This file's syntax has changed a lot over the years, so I will not explain it line by line but will point out only a few pain points.</p>
<p><strong>Android Gradle Plugin aka AGP.</strong></p>
<p>You see, Gradle can be used as a build system for different types of projects. It is used for, say, Spring Boot projects as well. The build process for Android is different from the one for Spring Boot. It is the plugins that tell Gradle how to actually build.</p>
<p>That one com.android.application line enables Gradle to run a lot of tasks, one by one, to produce an APK in the end.</p>
<p>Note that the plugin is not actually downloaded right away. The Gradle file specifies the plugin, which is dynamically fetched from the internet during the build process. This means that if the Gradle plugin is updated somewhere, your Gradle build will fetch the latest version. If you want to prevent that, you have to pin the specific version number, but generally no one does that.</p>
<p>The Gradle version and the Android Gradle Plugin version need to be compatible for them to work together well. But they are developed independently, and the fetching logic is not smart enough to figure out which version of the plugin must be used. A lot of issues are caused by this.</p>
<h3 id="heading-broader-issues-with-gradle-and-groovy">Broader issues with Gradle and Groovy.</h3>
<p>Another issue I have with Gradle and Groovy is that they use what appears to be a JSON-like syntax. We generally think of JSON or XML as static content that needs to be understood as a whole. But Groovy is a scripting language. What you see is actually a script which is executed line by line. This creates a great deal of confusion. The underlying mechanics of how Gradle works are often hidden by the DSL, making it difficult to grasp the build process intuitively.</p>
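<p>A toy Python analogy of this point (this is not how Gradle is actually implemented, just an illustration of the idea): what looks like a static block of configuration is really a closure that executes, in order, against a build object.</p>

```python
# Toy model of a Groovy-style configuration DSL. A "block" like
#   repositories { google() }
# is really a method call that runs code, not static data.

class BuildScript:
    def __init__(self):
        self.repos = []

    def google(self):
        self.repos.append("https://maven.google.com")

    def mavenCentral(self):
        self.repos.append("https://repo.maven.apache.org/maven2")

    def repositories(self, block):
        # The closure executes statement by statement against this
        # object, which is why the order of repositories can matter.
        block(self)

script = BuildScript()
script.repositories(lambda s: (s.google(), s.mavenCentral()))
```

<p>Because the block is executed rather than parsed as data, ordering and side effects matter, which is exactly what makes a build script harder to reason about than a plain config file.</p>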
<p>There is a lot of work happening to build the Android APK, and you have no idea how the Gradle file is achieving it. The Android build system is constantly evolving. Newer versions of Android Studio often come bundled with, or recommend, newer versions of Gradle and the Android Gradle Plugin. These updates can introduce breaking changes, deprecate old configurations, or enforce stricter rules that your old project's build files might not comply with.</p>
<h2 id="heading-solution">Solution ?</h2>
<p>I think things have gotten better, but still not fast enough. It might be a good idea to simply hide the details of Gradle from average developers and move to a more Docker-like system where everything is frozen in time, so old projects can continue to build even if you update your Android Studio. Perhaps Android Studio could simply take care of the build configuration through a much simpler UI.</p>
]]></description><link>https://androidauthority.dev/major-painpoints-of-android-development-as-a-side-activity-part-1-gradle</link><guid isPermaLink="true">https://androidauthority.dev/major-painpoints-of-android-development-as-a-side-activity-part-1-gradle</guid><category><![CDATA[Android]]></category><dc:creator><![CDATA[Wiseland AI Engineering Team]]></dc:creator></item><item><title><![CDATA[Personalization using AI]]></title><description><![CDATA[<p>One of the simplest use cases for generative AI has been how easily you can personalize your app's behavior for the user. Only a couple of years ago, personalization required some serious work: custom machine learning models and so on.</p>
<p>With generative AI, personalized experiences have become incredibly simple. Consider these use cases.</p>
<h2 id="heading-personalizing-for-global-events">Personalizing for global events</h2>
<p>For example, how do you personalize your posts given that today is Mother's Day? In the past, this required manual work where you had to feed your algorithm the fact that today might be Mother's Day and then tune the model to somehow prioritize posts based on that fact.</p>
<p>With LLMs, a lot of this global knowledge is part of the model and hence can very easily be incorporated. For example, consider the prompt.</p>
<blockquote>
<p>If today is May 11th, what is special about this date? I am expecting one single answer to this question so that this answer can be used for picking the right sort of images to show to the users of a social media app. Give a one-sentence answer.</p>
<blockquote>
<p>Today, May 11th, 2025, is Mother's Day in the United States.</p>
</blockquote>
</blockquote>
<p>How hard do you think such detection was in the past? Pretty hard for small-time developers.</p>
<h2 id="heading-feed-personalization">Feed personalization</h2>
<p>Personalization used to require things like collaborative filtering. This was expensive and hard to build for smaller players like individual developers. Generative AI has been a big shot in the arm here. It has become trivial to use LLMs with large context windows to give users a personalized experience in an app.</p>
<p>Given the user's history and other relevant and available information, LLMs can, in a matter of seconds, provide you a good ranking of content to show to the end user. This sort of feed personalization is very cheap and easy to implement.</p>
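<p>A minimal sketch of what this can look like. The call_llm function below is a stand-in for whatever model API you use, stubbed with a canned response so the example is self-contained; the prompt shape and the JSON-array convention are assumptions, not any specific provider's API.</p>

```python
import json

def call_llm(prompt):
    """Stand-in for a real model API call, stubbed for illustration.
    A real implementation would send `prompt` to an LLM endpoint."""
    return json.dumps(["post-17", "post-3", "post-42"])

def rank_feed(user_history, candidate_posts):
    """Ask the model to order candidate posts for this user and
    parse the returned JSON array of post ids."""
    prompt = (
        "Given this user's recent activity:\n"
        + "\n".join(user_history)
        + "\n\nRank these posts from most to least relevant and "
        + "return a JSON array of post ids:\n"
        + "\n".join(f"{p['id']}: {p['title']}" for p in candidate_posts)
    )
    return json.loads(call_llm(prompt))

posts = [
    {"id": "post-3", "title": "Gradle tips"},
    {"id": "post-17", "title": "Kotlin coroutines"},
    {"id": "post-42", "title": "Jetpack Compose"},
]
ranking = rank_feed(["read: Kotlin basics", "liked: Flow tutorial"], posts)
```

<p>The same pattern, serializing user context into the prompt and parsing a structured response back out, works for most lightweight personalization tasks.</p>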
<h2 id="heading-dynamic-content-generation">Dynamic content generation</h2>
<p>Another big advantage is in generating high-quality content and images for the app. In the past this required significant human effort. Not any more. Now AI can generate a lot of content to give an excellent experience.</p>
<p>For example: auto-generated emails, banners, and social media posts. These make the app look more polished and personalized.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Small app developers will benefit immensely from new AI models. These will enable them to give their users a far superior experience than was otherwise possible, with ease and at low cost.</p>
<p>Please consult us if you want to know how your app can give users a personalized experience using generative AI.</p>
]]></description><link>https://androidauthority.dev/personalization-using-ai</link><guid isPermaLink="true">https://androidauthority.dev/personalization-using-ai</guid><dc:creator><![CDATA[Wiseland AI Engineering Team]]></dc:creator></item><item><title><![CDATA[Android development to go private; Android will still be open source]]></title><description><![CDATA[<p>The Android operating system is open source under the project name AOSP (Android Open Source Project), but its development is primarily led by Google: Google develops Android internally and then patches its changes onto the external open-source code.</p>
<p>However, this process is now changing. Google has decided that development will happen privately inside Google, and Google will then simply publish the code as open source.</p>
<p>According to Google, this makes the development process faster, as Android engineers don't have to spend time managing merge conflicts.</p>
<h2 id="heading-does-this-make-a-difference">Does this make a difference?</h2>
<p>This should not make any difference to Android users or to the companies that make phones based on the OS, as Google has clarified that it remains committed to the open-source nature of the product.</p>
<p>There is a good chance that Google will move a lot of Android functionality under the closed-source GMS Core (Play Services) while keeping AOSP itself barebones. This, too, is expected.</p>
<p>Google clarifies that this shift does not alter the release cadence of new Android builds. Instead, the move aims to streamline the development process and mitigate potential conflicts arising from branch merging. <strong>The core principle of Android remaining an open-source platform is unchanged.</strong> New features and updates will still be released to AOSP, albeit <strong><em>after</em></strong> they've been finalized in the internal branches. This change primarily affects the timing of when these updates become publicly accessible. [<a target="_blank" href="https://chromeunboxed.com/why-google-is-taking-android-development-private-starting-next-week/#google_vignette">source</a>]</p>
]]></description><link>https://androidauthority.dev/android-development-to-get-private-android-would-still-be-open-sourced</link><guid isPermaLink="true">https://androidauthority.dev/android-development-to-get-private-android-would-still-be-open-sourced</guid><dc:creator><![CDATA[Wiseland AI Engineering Team]]></dc:creator></item><item><title><![CDATA[Google Pixel 9a is one of the best phones out there]]></title><description><![CDATA[<p><a target="_blank" href="https://store.google.com/product/pixel_9a?hl=en-US">The Google Store website now has a listing for the Google Pixel 9a</a>.</p>
<p>Google's a-series phones are the cheaper ones. Personally, I have liked these phones more than the flagships for simple reasons: they cost a lot less, they are smaller so they fit easily in my pocket, and they are just as good as their big brothers at the most important function, namely the camera. Of course, the Pro phones take very good photos, but the a-series also delivers high-quality, software-enhanced shots.</p>
<p>Another thing that has bothered me about the Pixel series is the camera visor. Phones have become thinner, but the laws of physics require a certain distance between lens and sensor for better camera optics, so on most major phones the camera lenses stick out of the body.</p>
<p>Not on the Pixel 9a. Here the camera is flush with the body, which means you can place the phone on a table without any wobble.</p>
<h2 id="heading-battery">Battery</h2>
<p>One area where the 9a beats all expectations is the battery. This phone has the biggest battery of any Pixel phone ever, including the 9 Pro. The Google Pixel 9a boasts a <strong><mark>5,100mAh</mark></strong> battery, the largest capacity ever fitted in a Pixel phone, and it supports 23W wired charging and 7.5W wireless charging.</p>
<p>This might give you around 30% more battery life than the Pixel 9 Pro.</p>
]]></description><link>https://androidauthority.dev/google-pixel-9a-is-one-of-the-best-phones-out-there</link><guid isPermaLink="true">https://androidauthority.dev/google-pixel-9a-is-one-of-the-best-phones-out-there</guid><dc:creator><![CDATA[Tanvi Nadkarni]]></dc:creator></item><item><title><![CDATA[The Android Development Ecosystem in 2025: A Deep Dive into New Features and Trends]]></title><description><![CDATA[<p>The Android development landscape is in a state of constant evolution, and the year 2025 promises to bring significant advancements across various facets of the ecosystem. This report provides a detailed analysis of the expected changes, drawing upon the latest announcements and insights to equip Android developers with the knowledge necessary to navigate the future of the platform. Key areas of focus include the newest iteration of the Android operating system, advancements in developer tooling, the evolution of Jetpack libraries, emerging architectural patterns, the role of cross-platform development, and progress in critical domains such as on-device artificial intelligence, security, and performance optimization. Staying abreast of these developments is crucial for developers to maintain a competitive edge and build innovative, high-quality Android applications.</p>
<h2 id="heading-the-evolution-of-the-android-platform-whats-new-in-android-16">The Evolution of the Android Platform: What's New in Android 16</h2>
<p>The next major release of Android, version 16, codenamed "Baklava," has been under active development, with multiple beta releases providing developers with early access to its new features and APIs. The timeline for Android 16 indicates a planned launch in the second quarter of 2025, following a series of beta releases throughout the early months of the year. Notably, Google has announced a shift towards a more frequent release cadence, with two Android API releases scheduled for 2025: a major release in Q2 and a minor release in Q4. This indicates a strategic move to accelerate innovation and deliver new functionalities and improvements to developers at a faster pace. The primary behavioral changes affecting applications are expected to be included in the Q2 major release, while the Q4 minor release will focus on feature updates, optimizations, and bug fixes without introducing intentional app-breaking changes. This new release strategy suggests a commitment to continuous improvement and responsiveness to the evolving needs of the Android ecosystem.</p>
<p>Android 16 introduces a range of developer-facing features and API changes designed to enhance app capabilities and user experience. One significant addition is the expansion of the Linux Terminal feature, initially introduced in Android 15, which now allows users to run Linux applications within a virtual machine on their devices. This functionality leverages the Android Virtualization Framework (AVF) to create an isolated Debian-based environment where users can execute Linux commands and graphical applications, even showcasing the ability to run classic desktop software. This enhancement could empower developers with familiar command-line tools directly on their Android devices, potentially streamlining certain development and debugging tasks.</p>
<p>The embedded photo picker has also received notable improvements, including support for cloud-based media services like Google Photos. This allows users to seamlessly select photos stored in their cloud accounts without needing to switch between different applications. Furthermore, the picker now integrates cloud albums alongside local content and exhibits enhanced responsiveness to configuration changes such as screen orientation or theme switching. These updates aim to provide a more unified and user-friendly experience for media selection within Android applications.</p>
<p>In the realm of health and fitness, Android 16 introduces enhanced functionality within Health Connect, enabling applications to access and manage medical data through a new set of APIs. The initial focus of this feature is on supporting the writing of medical records in the Fast Healthcare Interoperability Resources (FHIR) format, a standardized method for managing electronic health records across different healthcare systems. The initial developer preview includes support for immunization records, with plans to expand to other data types like lab results and medications. Applications can interact with this data using specific permissions, always requiring explicit user consent. This development signifies a growing emphasis on integrating health and wellness data within the Android platform, potentially fostering innovation in health-related applications.</p>
<p>For audio experiences, Android 16 incorporates Bluetooth LE Audio's Auracast technology, allowing users to stream audio to multiple Bluetooth devices simultaneously, such as headphones or speakers, without complex pairing processes. This feature requires both the source device and the receiving devices to support Bluetooth LE Audio. The introduction of Auracast opens up new possibilities for audio sharing, including public broadcasts in venues and personalized listening experiences with multiple audio devices.</p>
<p>To enhance user engagement, Android 16 introduces progress-centric notifications through a new <code>Notification.ProgressStyle</code>. This notification style allows developers to create notifications that visually track user-initiated, start-to-end journeys, making it particularly useful for applications such as ridesharing, delivery services, and navigation. By denoting states and milestones within the notification, applications can provide users with more informative and engaging updates on ongoing processes.</p>
<p>Navigation within applications is also set to become more intuitive with the introduction of predictive back updates. Android 16 adds new APIs to help developers enable predictive back system animations in gesture navigation, such as the animation that occurs when navigating back to the home screen. Registering the <code>onBackInvokedCallback</code> with the <code>PRIORITY_SYSTEM_NAVIGATION_OBSERVER</code> allows applications to receive the regular back invocation while the system handles the animation, ensuring a smoother and more visually coherent back navigation experience.</p>
<p>For professional videography, Android 16 introduces new camera APIs, including hybrid auto-exposure modes and precise control over color temperature and tint. These enhancements cater to the needs of professional video recording applications, providing finer control over image capture parameters. The platform also adds standard Intent actions for motion photo capture, along with support for UltraHDR image enhancements.</p>
<p>In terms of internationalization, Android 16 adds low-level support for rendering and measuring text vertically. This foundational support is primarily intended for library developers to build upon, enabling better support for languages that utilize vertical writing systems. Additionally, users will have the ability to customize their measurement system in regional preferences within the Settings app, providing a more personalized experience.</p>
<p>Security is a paramount concern, and Android 16 introduces several enhancements in this area. The Privacy Sandbox on Android continues to evolve, aiming to limit tracking mechanisms by utilizing anonymized data and local processing to deliver personalized content without compromising user privacy. Furthermore, Android 16 implements stronger security measures against Intent redirection attacks, requiring developers to thoroughly test their Intent handling. A new Local Network Protection (LNP) feature is being introduced, which will give users more control over which apps can access devices on their local network. Additionally, supported devices with Wi-Fi 6 802.11az will benefit from robust security features in Wi-Fi location, including AES-256 encryption and protection against man-in-the-middle (MITM) attacks.</p>
<p>Performance and battery efficiency are also key areas of focus in Android 16. The latest updates to the Android Runtime (ART) improve performance and provide support for additional Java features, with these improvements also being made available to devices running Android 12 and higher through Google Play System updates. Adjustments have been made to the regular and expedited job execution runtime quota based on factors such as the app standby bucket and whether the app is in the foreground. The Android Dynamic Performance Framework (ADPF) introduces Headroom APIs, which are expected to provide developers with more control over performance and power management. System-triggered profiling is also introduced to <code>ProfilingManager</code>, allowing applications to register interest in receiving traces for specific events like cold starts or Application Not Responding (ANR) errors, which can aid in diagnosing and resolving performance issues.</p>
<p>Accessibility remains a crucial aspect of the Android platform, and Android 16 includes several enhancements. Outline text is introduced as a replacement for high contrast text, significantly improving legibility for users with low vision. New <code>AccessibilityManager</code> APIs allow applications to check or register a listener to see if this mode is enabled. Additionally, there are general improvements to accessibility APIs, including support for supplemental descriptions, required form fields, elements with multiple labels, expandable elements, indeterminate <code>ProgressBars</code>, and tri-state <code>CheckBoxes</code>. Android 16 also introduces the capability to use the phone as a microphone input for voice calls with LE Audio hearing aids and provides ambient volume controls for these hearing aids. Furthermore, disruptive accessibility announcements made using <code>announceForAccessibility</code> or dispatching <code>TYPE_ANNOUNCEMENT</code> events are being deprecated, encouraging developers to use more user-friendly alternatives.</p>
<p>To provide a clearer overview of the Android 16 development cycle, the following table summarizes the key release phases and their approximate dates:</p>
<p><strong>Table 1: Android 16 Release Timeline</strong></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Release Phase</td><td>Date</td><td>Key Features/Updates</td></tr>
</thead>
<tbody>
<tr>
<td>Developer Preview</td><td>December 12, 2024</td><td>Early look at the next version of Android for testing and feedback</td></tr>
<tr>
<td>Beta 1</td><td>January 23, 2025</td><td>First beta release, open to developers and early adopters</td></tr>
<tr>
<td>Beta 2</td><td>February 13, 2025</td><td>Second beta release with new features for camera experiences and graphical effects</td></tr>
<tr>
<td>Beta 3</td><td>March 13, 2025</td><td>Platform Stability achieved; API surface locked; app-facing behaviors finalized</td></tr>
<tr>
<td>Final Release</td><td>Planned Q2 2025</td><td>Official launch of Android 16</td></tr>
<tr>
<td>Minor Release</td><td>Planned Q4 2025</td><td>Feature updates, optimizations, and bug fixes without app-breaking changes</td></tr>
</tbody>
</table>
</div><p>The following table provides a summary of the key new features in Android 16 that are particularly relevant for developers:</p>
<p><strong>Table 2: Key New Features in Android 16</strong></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Feature Name</td><td>Description</td></tr>
</thead>
<tbody>
<tr>
<td>Linux Terminal Expansion</td><td>Allows running Linux applications in a virtual machine using AVF.</td></tr>
<tr>
<td>Embedded Photo Picker Improvements</td><td>Enhanced with cloud service support and better responsiveness.</td></tr>
<tr>
<td>Health Records via Health Connect</td><td>APIs for accessing and managing medical data in FHIR format.</td></tr>
<tr>
<td>Audio Sharing with Auracast</td><td>Streams audio to multiple Bluetooth LE devices simultaneously.</td></tr>
<tr>
<td>Progress-Centric Notifications</td><td>New style for tracking user journeys in notifications.</td></tr>
<tr>
<td>Predictive Back Updates</td><td>APIs for enabling smoother back navigation animations.</td></tr>
<tr>
<td>Hybrid Auto-Exposure &amp; Precise Color Adjustments</td><td>Advanced camera controls for professional video.</td></tr>
<tr>
<td>Vertical Text Support</td><td>Low-level support for rendering text vertically.</td></tr>
<tr>
<td>Measurement System Customization</td><td>Users can choose their preferred measurement system.</td></tr>
<tr>
<td>Key Sharing API</td><td>APIs for securely sharing Android Keystore keys between apps.</td></tr>
<tr>
<td>Privacy Sandbox Enhancements</td><td>Continued development of privacy-preserving advertising technologies.</td></tr>
<tr>
<td>Improved Intent Redirection Security</td><td>Stronger protection against Intent redirection attacks.</td></tr>
<tr>
<td>Local Network Protection (LNP)</td><td>User control over app access to local network devices.</td></tr>
<tr>
<td>Wi-Fi Location Security</td><td>Enhanced security for Wi-Fi-based location services.</td></tr>
<tr>
<td>ART Performance Enhancements</td><td>Ongoing improvements to the Android Runtime.</td></tr>
<tr>
<td>JobScheduler Quota Adjustments</td><td>Optimized management of background task execution.</td></tr>
<tr>
<td>Headroom APIs in ADPF</td><td>New APIs for fine-grained performance and power management.</td></tr>
<tr>
<td>System-Triggered Profiling</td><td>Allows apps to register for traces on specific system events.</td></tr>
<tr>
<td>Outline Text for Contrast</td><td>Improves text legibility for users with low vision.</td></tr>
<tr>
<td>Improved Accessibility APIs</td><td>Various new APIs for enhancing app accessibility.</td></tr>
<tr>
<td>Phone as Microphone for LE Audio Hearing Aids</td><td>Enhances communication for hearing aid users.</td></tr>
<tr>
<td>Ambient Volume Controls for LE Audio Hearing Aids</td><td>Provides better control over listening environment.</td></tr>
<tr>
<td>Deprecating Disruptive Accessibility Announcements</td><td>Encourages more user-friendly accessibility feedback.</td></tr>
</tbody>
</table>
</div><h2 id="heading-boosting-productivity-updates-to-android-development-tools">Boosting Productivity: Updates to Android Development Tools</h2>
<p>Android Studio, the official Integrated Development Environment (IDE) for Android development, continues to receive updates aimed at enhancing developer productivity. The latest version in the Canary channel is Android Studio Narwhal (2025.1.1), indicating ongoing development and the introduction of new features. A significant enhancement in Narwhal is the deeper integration of Gemini, Google's AI assistant, which now supports multimodal image attachments. This allows developers to attach images directly to their Gemini prompts within Android Studio, enabling them to gain instant insights on complex technical diagrams or use design mockups to generate corresponding code skeletons. This capability has the potential to significantly improve efficiency by bridging the gap between visual specifications and code implementation.</p>
<p>To further streamline the interaction with Gemini, Android Studio Narwhal introduces a Prompt Library feature. This allows developers to save and manage frequently used prompts, accessible through the IDE settings or by right-clicking within the chat window. Saved prompts can be easily applied by right-clicking in the Editor and navigating to the Gemini menu, eliminating the need to retype common queries and saving valuable development time.</p>
<p>Testing and development workflows are also being enhanced with the introduction of app backup and restore functionality directly within Android Studio. This feature allows developers to generate backups of their app data in various forms (device-to-device, cloud, or unencrypted cloud) and restore them to other devices. This capability simplifies testing scenarios that involve data persistence, such as migrating data between app versions or comparing app behavior across different devices.</p>
<p>In line with the emerging landscape of immersive experiences, Android Studio now offers XR support, aligning with the Android XR Developer Preview. This indicates a growing focus on enabling developers to create applications for augmented and virtual reality platforms within the Android ecosystem.</p>
<p>For developers utilizing Jetpack Compose, Android Studio Narwhal provides a tool to generate previews of composables using Gemini. This feature allows developers to right-click on a composable function and instruct Gemini to generate a preview, either for a specific composable or for all composables within a file. This can significantly accelerate UI development by providing quick visualizations and facilitating experimentation with different UI elements and layouts.</p>
<p>Themed app icons are an integral part of the Android user experience, and Android Studio Narwhal includes a tool to preview how app icons will appear with themed icons. Even if a custom monochromatic layer for the icon hasn't been provided, developers can use this preview tool to get an idea of how their icon will look and identify any potential color contrast issues. This helps ensure visual consistency and adherence to the Material You design guidelines.</p>
<p>Starting with the Meerkat Feature Drop Canary 2, Android Studio is now using the same user configurations across canary, beta, and stable releases. This change simplifies the management of different Android Studio versions for developers who may work with multiple release channels.</p>
<p>Interestingly, there has been a notable change with the removal of the "Clean Project" and "Rebuild Project" buttons from Android Studio. While this decision has generated some discussion within the developer community, it likely reflects an ongoing effort to optimize the build process within the IDE, potentially making these full rebuild operations less frequently necessary in typical development workflows.</p>
<p>The integration of AI, particularly through Gemini, is becoming increasingly central to the Android development workflow. Beyond the features already mentioned, Gemini in Android Studio offers capabilities such as code generation, finding relevant resources, suggesting best practices, and troubleshooting errors. The introduction of an Image-to-Code multimodality feature allows Gemini to understand image attachments and generate code based on them. Furthermore, the availability of a Gemini API starter template streamlines the setup of new AI-powered projects within Android Studio. These advancements signify a profound shift towards AI-assisted development, with the potential to significantly enhance developer productivity and accelerate the application development lifecycle.</p>
<p>Advancements in testing and debugging capabilities also continue to be a focus. The Compose Preview Screenshot Testing tool, initially introduced in Android Studio Ladybug (2024.2.2) and likely seeing further improvements in Narwhal, allows developers to test their Jetpack Compose UI and prevent regressions by generating HTML reports that visually highlight any changes. The "Test and Develop with App Backup and Restore" feature, as previously discussed, also aids in testing data-related functionalities. Additionally, ViewModelScenario simplifies the process of unit testing ViewModels by managing their lifecycle and state, making it easier for developers to verify the behavior of their state management logic.</p>
<p>To provide a concise overview of the key features introduced in Android Studio Narwhal, the following table is presented:</p>
<p><strong>Table 3: Android Studio Narwhal Key Features</strong></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Feature Name</td><td>Description</td></tr>
</thead>
<tbody>
<tr>
<td>Gemini Integration with Multimodal Image Attachments</td><td>Attach images to Gemini prompts for code generation and insights.</td></tr>
<tr>
<td>Prompt Library for Gemini</td><td>Save and manage frequently used Gemini prompts.</td></tr>
<tr>
<td>Test and Develop with App Backup and Restore</td><td>Generate and restore app backups directly within the IDE.</td></tr>
<tr>
<td>Android Studio XR Support</td><td>Enables development for augmented and virtual reality platforms.</td></tr>
<tr>
<td>Generate Previews of Composables using Gemini</td><td>Use AI to generate previews of Jetpack Compose UI.</td></tr>
<tr>
<td>Themed App Icon Preview</td><td>Preview how app icons will look with Android's theming system.</td></tr>
<tr>
<td>Unified Configuration Directories</td><td>Shared user configurations across different Android Studio release channels.</td></tr>
</tbody>
</table>
</div><h2 id="heading-modern-ui-development-the-rise-of-jetpack-compose-and-libraries">Modern UI Development: The Rise of Jetpack Compose and Libraries</h2>
<p>Jetpack Compose has rapidly matured and by 2025 has become a cornerstone for modern Android UI development. Its declarative approach simplifies UI building, making it faster, easier, and more flexible. Ongoing optimizations in Jetpack Compose 2.0 are expected to further enhance UI rendering speed and smoothness. Moreover, there is a focus on enabling easier integration with other platforms, particularly supporting shared UI code across Android, iOS, and even the web, largely through the advancements in Compose Multiplatform.</p>
<p>For Compose Multiplatform, a significant objective is to achieve feature parity with other platforms, including implementing drag-and-drop support, improving text input and rendering capabilities, and ensuring seamless interoperability with HTML content. Stabilizing the interoperability between Compose and native Android views also remains a key priority. These efforts indicate a strong push towards making Jetpack Compose a truly versatile cross-platform UI framework.</p>
<p>The Android Jetpack suite of libraries continues to evolve, with frequent updates across various components. The Compose libraries, including animation, foundation, material, material3, runtime, and UI, receive regular updates, signifying active development and ongoing improvements. Libraries like Health Connect are also under active development, with beta releases indicating progress in providing standardized APIs for health and fitness data. Media3, the unified API for media playback, and Leanback, for building Android TV applications, also see continued updates. The potential introduction of new libraries or significant updates to existing ones, such as <code>androidx.health.connect</code> reaching beta status, further enriches the Jetpack ecosystem. These continuous updates underscore Google's commitment to providing developers with a robust and up-to-date toolkit for building high-quality Android applications.</p>
<p>Best practices in modern Android development increasingly revolve around leveraging the Android Jetpack libraries. Adopting Jetpack Compose for UI development is highly recommended due to its declarative nature and the productivity gains it offers. Utilizing Architecture Components like ViewModel, Room, and WorkManager is crucial for building well-structured and robust applications, although LiveData is becoming less prominent with the rise of Compose and Kotlin's Flow. For projects still using Fragments, the Navigation Component remains a valuable tool for managing in-app navigation. DataStore offers a more modern and safer approach to data persistence compared to SharedPreferences, while WorkManager provides an efficient way to manage background tasks. For dependency injection, integrating Hilt can significantly improve the scalability and maintainability of larger applications. The consistent application of these Jetpack components is fundamental to building maintainable, scalable, and performant Android applications in 2025.</p>
<h2 id="heading-architectural-trends-shaping-android-applications">Architectural Trends Shaping Android Applications</h2>
<p>The landscape of Android application architecture continues to evolve, with a growing emphasis on adopting modern and scalable patterns. MVVM (Model-View-ViewModel) remains a widely adopted architectural pattern for its effectiveness in separating UI logic from business logic. MVI (Model-View-Intent) is also gaining traction as a reactive pattern characterized by its unidirectional data flow. The concept of Composable Architecture, which involves building modular and reusable components that are loosely coupled and platform-agnostic, aligns particularly well with the principles of Jetpack Compose. While more prevalent in backend systems, the trend towards microservices, involving the construction of modular and independent components, may also influence mobile app architecture. Additionally, Event-Driven Architecture, where components react to state changes by emitting and listening to events, offers benefits in terms of scalability, flexibility, and performance, especially in applications requiring real-time engagement. These trends collectively indicate a move towards more sophisticated and adaptable architectural designs in Android development.</p>
<p>The emergence and increasing adoption of declarative UI frameworks like Jetpack Compose are significantly impacting application architecture. Compose's declarative nature naturally encourages a shift in architectural thinking, often leading to simpler and more reactive UI layers. It inherently promotes unidirectional data flow, making state management and UI updates more predictable and easier to reason about. State hoisting, a key concept within Compose, plays a crucial role in managing state effectively and promoting component reusability. The influence of Compose extends beyond just UI rendering, shaping how developers structure their applications and manage the flow of data.</p>
<p>Effective state management remains a critical aspect of building complex and reactive Android applications. ViewModel, often used in conjunction with StateFlow or LiveData, continues to be a common approach for managing UI-related data, particularly when working with Jetpack Compose. Compose itself provides built-in state management mechanisms through the use of <code>remember</code> and <code>mutableStateOf</code> for managing local UI state within composable functions. Although libraries like Redux, Zustand, and Recoil come from the React ecosystem, the underlying principles and patterns they use to manage complex application state might inspire similar solutions or adoption patterns within the Android/Kotlin ecosystem. Similarly, popular state management libraries in Flutter, such as Riverpod, GetX, BLoC, Provider, and MobX, could have parallels or influence in Android development, especially in the context of Kotlin Multiplatform projects where cross-platform state management becomes relevant. The evolving landscape suggests that developers will continue to explore and adopt more sophisticated state management solutions to handle the increasing complexity of modern Android applications.</p>
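<p>The unidirectional data flow and state hoisting that Compose encourages can be illustrated without any Compose dependency at all. The plain-Kotlin sketch below is illustrative only (the <code>CounterStateHolder</code> class and its names are hypothetical, not a Jetpack API): state is hoisted into a single holder, events flow up as method calls, and every state change flows back down through one observer path.</p>

```kotlin
// Plain-Kotlin sketch of unidirectional data flow / state hoisting.
// No Jetpack API is used; CounterStateHolder and its names are illustrative.
class CounterStateHolder {
    var count: Int = 0
        private set // the "UI" can read state but never mutates it directly

    private val observers = mutableListOf<(Int) -> Unit>()

    // The UI subscribes with a render function; state is hoisted in this holder.
    fun observe(render: (Int) -> Unit) {
        observers += render
        render(count) // render the initial state immediately
    }

    // Events flow upward as intents; new state flows back down to every observer.
    fun onIncrement() {
        count += 1
        observers.forEach { it(count) }
    }
}
```

<p>In real Compose code the observer path is the recomposition triggered by <code>mutableStateOf</code>, but the shape is the same: one owner of state, one direction of flow, which is what makes updates predictable and testable.</p>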
<h2 id="heading-cross-platform-influence-flutter-and-kotlin-multiplatform-in-android">Cross-Platform Influence: Flutter and Kotlin Multiplatform in Android</h2>
<p>Flutter continues to exert a significant influence on the Android development ecosystem as a leading framework for cross-platform app development. Known for its ability to create natively compiled applications from a single codebase for mobile, web, and desktop, Flutter aims to deliver near-native performance. Its key benefits include the efficiency of a single codebase for both Android and iOS platforms, the rapid development facilitated by features like hot reload, and a vibrant and growing ecosystem of packages and plugins. Flutter's capabilities are also expanding beyond mobile, with increasing support for web and desktop application development. Furthermore, there is a growing emphasis on integrating artificial intelligence and machine learning functionalities within Flutter applications. Its strong performance and rapid development cycle make Flutter a compelling choice for projects requiring broad platform reach and visually appealing user interfaces.</p>
<p>Kotlin Multiplatform (KMP) is emerging as another significant player in the cross-platform development landscape, particularly within the Android ecosystem. KMP allows developers to write their application's business logic once in Kotlin and then share it across multiple platforms, while still allowing for platform-specific UI implementations. JetBrains, the creator of Kotlin, is actively investing in enhancing the KMP ecosystem. Key initiatives include making Compose Multiplatform for iOS stable, releasing a direct Kotlin-to-Swift export, and developing an all-in-one IDE specifically for KMP development. There is a strong focus on achieving feature parity for Jetpack Compose on iOS within Compose Multiplatform. Efforts are also underway to improve the tooling and overall development experience for KMP, aiming for a seamless workflow across different platforms. The already high adoption rate of Kotlin as the primary language for Android development, with over 90% of Android developers now using it, provides a strong foundation for the further adoption of KMP. For teams already invested in the Kotlin ecosystem, KMP offers a natural path to cross-platform development, particularly for sharing core application logic while maintaining native user interfaces.</p>
<p>When considering a cross-platform development strategy, several factors come into play. Project requirements, such as the complexity of the UI, the need for high performance, and the necessity of accessing specific native device features, should be carefully evaluated. The expertise and familiarity of the development team with a particular framework are also crucial considerations. Flutter excels in rapid UI development and delivering a consistent user interface across platforms. KMP offers more flexibility in UI implementation by allowing for native UIs on each platform, while providing significant code sharing for the underlying business logic. Hybrid approaches, which combine native and cross-platform components, might also be suitable for certain projects. Ultimately, the choice of the right cross-platform strategy depends on a careful assessment of the project's specific needs and the development team's capabilities.</p>
<p><strong>Table 4: Comparison of Cross-Platform Frameworks</strong></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Feature</td><td>Flutter</td><td>Kotlin Multiplatform</td></tr>
</thead>
<tbody>
<tr>
<td>UI Development</td><td>Single codebase for UI, rich set of customizable widgets.</td><td>Platform-specific UI (e.g., Jetpack Compose on Android, SwiftUI on iOS).</td></tr>
<tr>
<td>Code Sharing</td><td>High degree of code sharing for both UI and business logic.</td><td>Primarily focuses on sharing business logic; UI is typically platform-specific.</td></tr>
<tr>
<td>Performance</td><td>Aims for near-native performance through its own rendering engine.</td><td>Native performance as UI is built with platform-specific tools. Shared logic runs natively on each platform.</td></tr>
<tr>
<td>Development Speed</td><td>Generally fast development due to hot reload and a comprehensive widget library.</td><td>Development speed for shared logic can be high; UI development speed depends on native platform tools.</td></tr>
<tr>
<td>Ecosystem</td><td>Large and growing ecosystem of packages and plugins.</td><td>Growing ecosystem, especially around Kotlin libraries and multiplatform support.</td></tr>
<tr>
<td>Platform Support</td><td>Android, iOS, Web, Desktop, Embedded.</td><td>Android, iOS, Web (via Kotlin/JS), Desktop (via Compose Multiplatform).</td></tr>
</tbody>
</table>
</div>
<h2 id="heading-emerging-technologies-aiml-security-and-performance">Emerging Technologies: AI/ML, Security, and Performance</h2>
<p>The Android development ecosystem in 2025 is witnessing an increasing integration of emerging technologies, particularly in the areas of artificial intelligence and machine learning, security, and performance optimization. AI and ML are being incorporated into Android applications to deliver hyper-personalization, automate tasks, and enable features such as intelligent voice assistants and advanced image recognition. There is a growing trend towards on-device machine learning, which offers advantages in terms of user privacy, application speed, and the ability to function offline. Google's Gemini Nano is specifically designed for efficient on-device AI tasks on Android devices. ML Kit provides developers with production-ready solutions for common ML tasks, simplifying the integration of these capabilities into their applications. Advancements in ML model scaling, knowledge distillation, and quantization techniques are making it possible to create smaller and more efficient on-device models, expanding the potential applications of AI/ML in mobile development.</p>
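<p>To see why quantization shrinks models, consider a toy sketch of symmetric int8 quantization: each float weight is stored in one byte instead of four, at the cost of a small rounding error. This is only an illustration of the idea, not a real framework API; production toolchains add calibration, per-channel scales, and quantization-aware training.</p>

```kotlin
import kotlin.math.abs
import kotlin.math.roundToInt

// Toy symmetric int8 quantization: map floats in [-max, max] onto [-127, 127].
// Storage drops from 4 bytes to 1 byte per weight; error is bounded by the scale.
fun quantize(weights: FloatArray): Pair<ByteArray, Float> {
    val maxAbs = weights.maxOf { abs(it) }.takeIf { it > 0f } ?: 1f
    val scale = maxAbs / 127f  // one float maps to one quantization step
    val q = ByteArray(weights.size) { i ->
        (weights[i] / scale).roundToInt().coerceIn(-127, 127).toByte()
    }
    return q to scale
}

// Recover approximate float weights from the int8 representation.
fun dequantize(q: ByteArray, scale: Float): FloatArray =
    FloatArray(q.size) { i -> q[i] * scale }
```

<p>The round trip loses at most one quantization step per weight, which is why reasonably smooth weight distributions survive int8 storage well.</p>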
<p>Security remains a paramount concern in Android development, with an increasing focus on protecting user data and ensuring application integrity. There is a growing emphasis on compliance with stringent security regulations from financial institutions and other regulatory bodies. Android 16 introduces enhanced privacy features, such as more granular app permissions and real-time alerts for data access. The threat landscape continues to evolve, necessitating robust security measures to protect against a diverse range of cyber threats. Secure management of API keys and the adoption of a zero-trust security approach are becoming increasingly important. Furthermore, there is increased scrutiny of open-source dependencies, and runtime protection of applications is becoming a critical layer of defense.</p>
<p>Optimizing Android app performance is an ongoing endeavor, crucial for ensuring a positive user experience and high user retention. Key metrics for measuring app performance include CPU usage, memory consumption, frame rate, and network performance. Effective optimization strategies involve refactoring inefficient code, using device resources wisely, optimizing network performance through techniques like caching and compression, and leveraging Android's built-in profiling tools. Reducing app size through the use of App Bundles and code minification is also essential. Utilizing WorkManager for efficient management of background tasks, improving UI performance with Jetpack Compose and the Layout Inspector, minimizing activity leaks, and optimizing memory usage are all critical aspects of ensuring smooth and efficient Android applications.</p>
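<p>As an illustration of the caching strategy mentioned above, here is a minimal in-memory LRU cache in plain Kotlin. Android already ships <code>android.util.LruCache</code> for this purpose; the class below is just a sketch of the idea, bounding memory use while serving repeat requests without hitting the network.</p>

```kotlin
// Minimal LRU cache sketch built on LinkedHashMap's access-order mode.
// The name LruCache here is illustrative, not the Android framework class.
class LruCache<K, V>(private val maxEntries: Int) {
    private val map = object : LinkedHashMap<K, V>(16, 0.75f, true) {
        // Called by LinkedHashMap after each insertion; returning true evicts
        // the least-recently-used entry once the cache exceeds its capacity.
        override fun removeEldestEntry(eldest: MutableMap.MutableEntry<K, V>): Boolean =
            size > maxEntries
    }

    fun get(key: K): V? = map[key]          // also marks the entry as recently used
    fun put(key: K, value: V) { map[key] = value }
    val size: Int get() = map.size
}
```

<p>A network layer would consult such a cache before issuing a request and store successful responses in it, trading a bounded amount of memory for fewer round trips.</p>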
<h2 id="heading-community-insights-and-the-future-outlook">Community Insights and the Future Outlook</h2>
<p>The Android developer community plays a vital role in shaping the future of the platform, and discussions within the community often reflect emerging trends and areas of interest. There is significant discussion around the new features and developer previews of Android 16. The adoption and best practices for using Jetpack Compose are also frequent topics of conversation. The integration of AI into development tools, such as Gemini in Android Studio, has also sparked considerable interest. Debates around the merits of native versus cross-platform development continue within the community. Furthermore, there is a consistent desire for improvements in UI/UX, greater customization options, and enhanced performance within the Android operating system itself. These community insights highlight the key areas of focus and the ongoing dialogue that drives innovation within the Android development ecosystem.</p>
<p>Looking ahead to 2025, Android developers will encounter both challenges and opportunities. The rapid pace of platform updates, with two Android API releases in the year, will require developers to be agile in adopting new features and ensuring compatibility. Migrating existing codebases to modern UI frameworks like Jetpack Compose will continue to be a significant undertaking for many. Adapting to new privacy regulations and the implementation of the Privacy Sandbox will necessitate changes in how applications handle user data and advertising. Mastering emerging technologies such as on-device AI/ML and Android XR will open up new possibilities but will also require learning new skills and tools. Ensuring robust app security in the face of evolving cyber threats will remain a critical challenge. Finally, optimizing applications for an increasingly diverse range of devices and form factors, including foldables, tablets, wearables, and TVs, will be essential for reaching a broad user base.</p>
<p>Despite these challenges, 2025 presents numerous opportunities for Android developers. The new features in Android 16 provide a foundation for creating innovative and engaging applications. Enhanced APIs offer the potential to improve user engagement and accessibility. AI-powered development tools promise to boost productivity and streamline workflows. Cross-platform frameworks like Flutter and Kotlin Multiplatform enable developers to reach wider audiences with their applications. The advancements in on-device AI/ML allow for the creation of more intelligent and personalized user experiences. Finally, the growing emphasis on security and privacy creates an opportunity for developers to build trust with their users by developing secure and privacy-focused applications.</p>
<h2 id="heading-conclusion-navigating-the-future-of-android-development">Conclusion: Navigating the Future of Android Development</h2>
<p>The Android development ecosystem in 2025 is characterized by rapid innovation and significant advancements across the platform, developer tools, and key libraries. Android 16 introduces a plethora of new features focused on enhancing user experience, security, performance, and accessibility. The deep integration of AI into Android Studio promises to revolutionize development workflows and boost productivity. Jetpack Compose continues its ascent as the standard for modern Android UI development, with ongoing improvements and expanding cross-platform capabilities. Emerging technologies like on-device AI/ML offer exciting possibilities for creating intelligent and personalized applications, while the increasing focus on security underscores the importance of building trustworthy and privacy-respecting apps.</p>
<p>To thrive in this evolving landscape, Android developers must remain informed about these changes, embrace new technologies, and prioritize security, performance, and accessibility in their applications. The future of Android development is dynamic and full of potential, and developers who proactively adapt to these trends will be well-positioned to shape the next generation of mobile experiences.</p>
]]></description><link>https://androidauthority.dev/the-android-development-ecosystem-in-2025-a-deep-dive-into-new-features-and-trends</link><guid isPermaLink="true">https://androidauthority.dev/the-android-development-ecosystem-in-2025-a-deep-dive-into-new-features-and-trends</guid><dc:creator><![CDATA[Wiseland AI Engineering Team]]></dc:creator></item><item><title><![CDATA[Top 5 AI-Powered Features to Implement in Your Android App in 2025]]></title><description><![CDATA[<p>Everyone is building AI-powered apps these days, and this blog post explores some ideas you could build yourself. Let us know what you think in the comments.</p>
<h2 id="heading-advanced-natural-language-processing">Advanced Natural Language Processing</h2>
<p>Ultimately, LLMs are about language and natural language processing. There is so much potential now to build apps that use natural language as the primary interface, and even your existing apps can be turned into one. Natural language processing means users could interact with your app not just by voice and chat but also through, say, text messages, to get things done.</p>
<h2 id="heading-personalized-user-experience">Personalized User Experience</h2>
<p>Personalization has long been touted as the next big thing, but with generative AI there is demand for even more of it. Personalization is in some sense an endless road without a destination, and AI takes it to the next level.</p>
<p>Not only can content be personalized, but the entire app can be personalized to a user's very specific mindset. UI, features, social features, and more can all adapt to the user's personal experience. ChatGPT and others have already demonstrated how building a proper context of a user's desires enables much more tailored responses.</p>
<h2 id="heading-image-recognition">Image recognition</h2>
<p>Image models have become increasingly capable. They can not just generate images but also extract very complex metadata from images and videos. This was previously possible only with extremely complex engineering, but now it is as simple as a REST call. The new generation of apps will increasingly use images: take a photo and search the web, take a photo and place an order, take a photo of a car and detect a good price to sell it for, and so on.</p>
<p>The possibilities are endless.</p>
<h2 id="heading-super-apps">Super Apps</h2>
<p>Super apps are an emerging trend: a single app that does a lot, from your banking needs to shopping, from maps to note taking. These apps will be built as lego blocks of various features that adapt to the user's needs and context. There is a good chance some super apps might simply take over all the needs of the user.</p>
<h2 id="heading-agentic-apps">Agentic Apps</h2>
<p>Imagine pressing a button that says Hire Bob, who can book concert tickets for me. Bob gets hired; he gets a phone number, a credit card, a persona, and he knows you as well as your spouse does. When asked, Bob monitors the web for concert tickets, negotiates when he must, looks for deals, and buys the tickets for you. After the job is done you fire him without remorse. Bob could be someone from India or could just be a pure AI bot.</p>
<p><strong>This will be revolutionary.</strong></p>
]]></description><link>https://androidauthority.dev/top-5-ai-powered-features-to-implement-in-your-android-app-in-2025</link><guid isPermaLink="true">https://androidauthority.dev/top-5-ai-powered-features-to-implement-in-your-android-app-in-2025</guid><category><![CDATA[Android]]></category><dc:creator><![CDATA[Tanvi Nadkarni]]></dc:creator></item><item><title><![CDATA[Google Meet can now create post-meeting to-do list]]></title><description><![CDATA[<p>Google has been making rapid progress integrating its AI solutions into all of its products. Google Gemini is now everywhere, including Docs, Maps, and Search. Gemini is a very good multimodal AI model, able to process text, audio, video, and images. This makes Gemini distinctly better than models that are often either image- or text-based but not seamlessly multimodal like Google Gemini.</p>
<p>Google is cleverly utilizing its existing suite of products to push AI usage and get users hooked on AI-powered productivity assistance. This not only establishes Google as a major AI player; it also unlocks more value for users, allowing Google to increase revenue through these products.</p>
<p>Google Meet is a very popular video conferencing system that competes with Zoom, Teams, and Slack. But it is more tightly integrated into Google's suite of cloud products and requires a Google ID to use, either a consumer Gmail account or a corporate account obtained from Google.</p>
<p>Google launched the <a target="_blank" href="https://support.google.com/meet/answer/14754931?hl=en">meeting summary feature some time ago</a>. Google's AI listens to your meetings and takes notes based on who said what. I have tested this feature, and it was pretty darn good for most meetings, such as stand-ups and status updates.</p>
<p>Looks like Google has decided to take an extra step: from meeting notes it has moved on to <a target="_blank" href="https://www.forbes.com/sites/dimitarmixmihov/2025/02/18/google-meets-note-taking-ai-can-now-suggest-to-dos-but-there-is-a-catch/">deciding action items for your post-meeting work</a>. This is a pretty good feature if you ask me, as most meetings are about deciding what to do next.</p>
<p>However, this feature is only for Google Workspace users and is not available to consumer accounts. This also makes sense, given that most meetings happen in corporate offices.</p>
<p>This move benefits Google in multiple ways. Once Google Meet becomes a critical part of your day-to-day business, it is hard to move away from. This allows Google to improve both customer satisfaction and retention in the long run. It also demonstrates Google's ability to put AI to productive use where it actually saves time and makes money for Google and its customers.</p>
<p>Microsoft's CEO Satya Nadella recently said that AGI is not happening: the value of AI truly lies in measuring how much benefit it adds to the economy rather than in some distant futuristic dream of AGI, and the profits are going to come from the app layer that adds productivity to humans and their work. Google has demonstrated how well this can actually be achieved, and soon its competitors are likely to move in that direction.</p>
]]></description><link>https://androidauthority.dev/google-meet-can-now-create-post-meeting-to-do-list</link><guid isPermaLink="true">https://androidauthority.dev/google-meet-can-now-create-post-meeting-to-do-list</guid><category><![CDATA[AI]]></category><category><![CDATA[Google Meet]]></category><dc:creator><![CDATA[Tanvi Nadkarni]]></dc:creator></item><item><title><![CDATA[Kotlin multi-platform development]]></title><description><![CDATA[<p>Kotlin as a programming language has been gaining a lot of popularity, and Google's adoption of the language seems to have paid off both for Google and for the community at large. Kotlin's syntax is extremely well suited to app development and resembles Dart and Swift a lot, which makes code readable and easier to develop. It overcame many of Java's shortcomings, such as verbose syntax, reliance on third-party libraries for many common patterns, lack of null safety, and lack of smart type casting, while retaining the best parts of Java and, most importantly, remaining fully compatible with Java libraries. This emergence of Kotlin is now paving the way for even larger adoption.</p>
<h2 id="heading-multiplatform-development">Multiplatform development</h2>
<p>Android and iOS have become the two major mobile operating systems. However, each is also an ecosystem: Android has Android Automotive, Google TV, etc., while iOS has CarPlay, Apple TV, iPads, etc. Not to mention the web also plays a big role for apps.</p>
<p>Any reasonable company needs to launch its apps on both Android and iOS. Some, such as streaming platforms, might also have to target TVs or even auto screens. This makes the development process complicated and expensive. An early solution used a webview, where the app is nothing but a wrapper around a web app. However, the inherent limitations of the browser made this approach less performant. React Native and Flutter are two major frameworks developed by Meta and Google respectively. React Native uses JSX whereas Flutter uses Dart. Both approaches have been relatively successful, though each has its own limitations.</p>
<p>The way these two frameworks work is by using the extremely low-level graphics APIs provided by the operating systems (which are also used by, say, games) and rendering the UI with them. This makes apps much faster than alternatives such as webviews. The framework, however, is generally very complex and relies on a plugin architecture to use the various features of the OS, such as the Camera API.</p>
<p>The advantage of multiplatform frameworks is that they make development very simple for simple apps, but as apps get complex they can present some challenges.</p>
<h2 id="heading-kotlin-multiplatform">Kotlin Multiplatform</h2>
<p>Kotlin Multiplatform is a different take from the approach Flutter or React Native use. Kotlin presents merely a language wrapper around the native UI of Android and iOS. The Kotlin code gets translated into native binaries for Android and iOS without any rendering engine, which means apps are far more performant than their Flutter or React Native counterparts.</p>
<p>Of course, Android and iOS are very different, and hence not all code can be shared across both platforms. Kotlin/Native provides bidirectional interoperability with Swift and Objective-C, enabling seamless integration of Kotlin code into iOS projects. Similarly, it provides seamless integration with Android's SDK. Kotlin, being a first-class citizen for Android development, has much better support for Android than for iOS.</p>
<p>Compose Multiplatform is another library that can be used with Kotlin Multiplatform. It is a separate project but is very complementary.</p>
<h2 id="heading-sharing-business-logic-and-not-just-code">Sharing business logic and not just code</h2>
<p>While Flutter and React Native focus on allowing developers to have a single codebase, Kotlin Multiplatform has a slightly different objective. It recognizes that Android and iOS might diverge from each other in many ways, and that engineers are willing to, and even prefer to, write OS-specific code when performance and efficient use of features matter most, but still do not want to duplicate the dumb parts.</p>
<p>This is where Kotlin Multiplatform excels, if you ask me. It splits the codebase into Android-specific code, iOS-specific code, and common logic. The common logic is where you put all the core business logic, common algorithms, and so on, while in the platform-specific code you write more platform-focused code. It is up to you to decide this boundary, which gives immense flexibility.</p>
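<p>A rough sketch of that split: real KMP projects express it with <code>expect</code>/<code>actual</code> declarations across source sets, but the same boundary can be modeled in a single file with an interface. The names below (<code>Platform</code>, <code>GreetingService</code>) are illustrative, not a real API.</p>

```kotlin
// The boundary the platform-specific code must implement.
interface Platform {
    val name: String  // each platform supplies its own value
}

// "commonMain": business logic written once and shared by both apps.
class GreetingService(private val platform: Platform) {
    fun greeting(): String = "Hello from ${platform.name}"
}

// "androidMain" / "iosMain": each platform provides its own implementation.
class AndroidPlatform : Platform { override val name = "Android" }
class IosPlatform : Platform { override val name = "iOS" }
```

<p>Everything above the platform classes is shared, tested once, and reused; only the thin platform layer is duplicated, which is exactly the tradeoff KMP is designed around.</p>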
<p>This yields real productivity gains: developers do not have to learn multiple programming languages for most of the codebase, and a single source of truth for business logic makes testing more efficient.</p>
<h2 id="heading-choosing-kotlin-multiplatform">Choosing Kotlin multiplatform</h2>
<p>Kotlin Multiplatform offers all the benefits generally associated with Flutter and React Native but with better performance. There is some tradeoff, as you still have to write a decent amount of iOS- and Android-specific code, but since the language is the same, the productivity gains are significant.</p>
<p>However, this is a new technology and is not fully mature; as adoption goes up and community support grows this will change, but it also means betting a large project on this technology is risky. Also, iOS support is weaker than Android support, and given Apple's closed nature this is likely to remain so. If iOS is your primary target, you might want to reconsider this technology.</p>
]]></description><link>https://androidauthority.dev/kotlin-multi-platform-development</link><guid isPermaLink="true">https://androidauthority.dev/kotlin-multi-platform-development</guid><category><![CDATA[Kotlin]]></category><category><![CDATA[Kotlin Multiplatform]]></category><dc:creator><![CDATA[Tanvi Nadkarni]]></dc:creator></item><item><title><![CDATA[Android Automotive OS - Not android but Android like.]]></title><description><![CDATA[<p>Google might have managed to pull off an interesting move that beats Apple. Cars these days are more about the software experience than the automobile itself; in fact, they are software products. Car companies are notoriously bad at building software. Android Auto and Apple CarPlay, however, managed to entice users and car makers with a brilliant product.</p>
<p>When you use Android Auto or Apple CarPlay, the phone basically takes over your car console. The car becomes an extension of your phone and gives a pleasant experience. Your streaming services, navigation with stored addresses, and calendar all integrate with the car seamlessly. Where is the problem, then, with all the amazing features Apple and Google give you without costing the car company even a penny?</p>
<p>The problem is that car companies realized there is more money in making software for cars. If you entirely outsource the software experience to Apple or Google, you have no idea how they are using your cars to make money. People are buying Spotify Premium or YouTube Premium to listen to podcasts while driving, and while a dozen other companies make money in the process, the car company does not make any.</p>
<h2 id="heading-taking-charge-of-your-car">Taking charge of your car</h2>
<p>Car companies such as GM have hence decided that their future cars won't support CarPlay or Android Auto. Instead they are rolling out Android Automotive OS, a Google OS that looks and feels like Android but is built for cars, as part of the car's software suite.</p>
<p>This allows GM to take more control of the car's software experience and also earn revenue from you in the future. Even though it is Android Automotive OS, it is not the same as Android Auto and still won't let you connect your Android phone to the console. It might still be able to sync things from your Android phone, but Apple users are pretty much ignored.</p>
<h2 id="heading-does-this-move-make-sense">Does this move make sense ?</h2>
<p>This move does not make sense from a consumer point of view. Very likely, consumers will be upset, won't buy the cars, and GM will end up allowing folks to use CarPlay and Android Auto after all. A lot of cars, such as the Polestar, Volvo, and Honda Prologue, have adopted a strategy where they run Android Automotive OS but still let you use Android Auto and Apple CarPlay if you choose.</p>
<p>Giving consumers extra choice is a good idea. That forces competitors to improve the experience for the user, and everyone wins. Locking people into a car company's software experience is, in my opinion, a terrible idea.</p>
<h2 id="heading-why-google-is-a-winner">Why Google is a winner ?</h2>
<p>Google, however, seems to be a clear winner either way, as Android Automotive OS is the only real car OS available today for vehicle manufacturers. There are indeed other options, such as BlackBerry's QNX RTOS, but ultimately nothing comes close to Google in terms of reliability, support, and user friendliness.</p>
<p>Apple unfortunately does not have anything equivalent and is unlikely to build anything, as it does not generally deal with OEM partners.</p>
<p>As the AI battle heats up, we will also see Google Gemini in cars, and we would be very interested in seeing how that plays out.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=YE5tWQI6PpU">https://www.youtube.com/watch?v=YE5tWQI6PpU</a></div>
]]></description><link>https://androidauthority.dev/android-automotive-os-not-android-but-android-like</link><guid isPermaLink="true">https://androidauthority.dev/android-automotive-os-not-android-but-android-like</guid><category><![CDATA[automotive]]></category><dc:creator><![CDATA[Wiseland AI Engineering Team]]></dc:creator></item><item><title><![CDATA[Deepseek shows that the real money is in App layer.]]></title><description><![CDATA[<p>DeepSeek R1 is an open-weights model and vastly better than anything else available for free, including Meta's Llama. The model was also trained on a relatively lower budget than its competitors'. What this shows is that the cost of making fairly good AI models is going down exponentially, while making a model that is vastly better than everyone else's is going to remain a hard problem.</p>
<blockquote>
<h3 id="heading-on-publicly-available-iq-tests-deepseek-ties-or-exceeds-all-free-ais-but-is-far-behind-openais-models-for-paid-users-sourcehttpswwwmaximumtruthorgpchinas-deepseek-is-not-as-smart-as">On publicly-available IQ tests, DeepSeek ties or exceeds all free AIs, but is far behind OpenAI's models for paid users [<a target="_blank" href="https://www.maximumtruth.org/p/chinas-deepseek-is-not-as-smart-as">source</a>]</h3>
</blockquote>
<p>This success by DeepSeek presents interesting economic challenges. It means competition among foundational model providers is going to heat up, and as a result no company will be able to price its model usage much higher than the rest. This eats into their margins.</p>
<p>This basically means that app developers who simply build a wrapper around an existing model, such as Perplexity AI, are much better poised for success and profitability than the model companies themselves.</p>
<p>Perplexity quickly announced that DeepSeek's US-hosted version would be available to its users, with more than 500 queries per day for paid users. This means competition in the app layer is going to heat up significantly, which bodes well for Android developers as well.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=Y5gL70cOvmw">https://www.youtube.com/watch?v=Y5gL70cOvmw</a></div>
]]></description><link>https://androidauthority.dev/deepseek-shows-that-the-real-money-is-in-app-layer</link><guid isPermaLink="true">https://androidauthority.dev/deepseek-shows-that-the-real-money-is-in-app-layer</guid><category><![CDATA[Deepseek]]></category><dc:creator><![CDATA[Tanvi Nadkarni]]></dc:creator></item><item><title><![CDATA[CameraX - Improving camera usage in your Android App]]></title><description><![CDATA[<p>Cameras are one of the most important features of modern apps. However, using the <strong>Camera API</strong> has always been fairly complex for developers. Unlike the iPhone where Camera hardware is standardized, the Android ecosystem has many different types of hardware, and hence software development around it has been complicated.</p>
<p>Google heard the pain points and, as part of their <a target="_blank" href="https://developer.android.com/media/camera/camerax">Jetpack set of tools, released CameraX</a>, an SDK that greatly simplifies developing apps with the camera. CameraX supports a wide range of Android devices running Android 5.0 (API level 21) and higher, covering over 98% of devices in use. CameraX focuses on several use cases that allow you to use the camera in your app much more easily.</p>
<p>CameraX ensures consistent camera behavior across different devices, handling aspects like aspect ratio, orientation, rotation, preview size, and image size, which traditionally required a lot of manual effort.</p>
<blockquote>
<p>CameraX is a Jetpack library, built to help make camera app development easier. For new apps, we recommend starting with CameraX. It provides a consistent, easy-to-use API that works across the vast majority of Android devices, with backward-compatibility to Android 5.0 (API level 21). If you're migrating an app from Camera1, see our <a target="_blank" href="https://developer.android.com/training/camerax/camera1-to-camerax">Camera1 to CameraX migration guide</a>.</p>
</blockquote>
<h3 id="heading-ease-of-use">Ease of use</h3>
<p>CameraX emphasizes use cases, which allow you to focus on the task you need to get done instead of managing device-specific nuances. Most common camera use cases are supported:</p>
<ul>
<li><p><a target="_blank" href="https://developer.android.com/training/camerax/preview">Preview</a>: View an image on the display.</p>
</li>
<li><p><a target="_blank" href="https://developer.android.com/training/camerax/analyze">Image analysis</a>: Access a buffer seamlessly for use in your algorithms, such as to pass to ML Kit.</p>
</li>
<li><p><a target="_blank" href="https://developer.android.com/training/camerax/take-photo">Image capture</a>: Save images.</p>
</li>
<li><p><a target="_blank" href="https://developer.android.com/training/camerax/video-capture">Video capture</a>: Save video and audio.</p>
</li>
</ul>
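<p>Image capture follows the same builder-and-bind pattern as the other use cases: build an <code>ImageCapture</code> use case, bind it alongside the preview, then call <code>takePicture</code>. A hedged sketch (the file naming and cache-directory choice here are only illustrative):</p>
<pre><code class="lang-kotlin">import android.content.Context
import android.util.Log
import androidx.camera.core.ImageCapture
import androidx.camera.core.ImageCaptureException
import androidx.core.content.ContextCompat
import java.io.File

// After building, bind alongside the other use cases:
// cameraProvider.bindToLifecycle(owner, selector, preview, imageCapture)
fun takePhoto(context: Context, imageCapture: ImageCapture) {
    // Illustrative output location: the app's external cache directory
    val file = File(context.externalCacheDir, "photo_${System.currentTimeMillis()}.jpg")
    val outputOptions = ImageCapture.OutputFileOptions.Builder(file).build()
    imageCapture.takePicture(
        outputOptions,
        ContextCompat.getMainExecutor(context),
        object : ImageCapture.OnImageSavedCallback {
            override fun onImageSaved(output: ImageCapture.OutputFileResults) {
                Log.d("CameraXBasic", "Photo saved: ${output.savedUri}")
            }
            override fun onError(exc: ImageCaptureException) {
                Log.e("CameraXBasic", "Photo capture failed", exc)
            }
        }
    )
}
</code></pre>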
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/I4rDx90Nlus">https://youtu.be/I4rDx90Nlus</a></div>
<h2 id="heading-example-code">Example Code</h2>
<pre><code class="lang-kotlin"><span class="hljs-keyword">import</span> android.Manifest
<span class="hljs-keyword">import</span> android.content.pm.PackageManager
<span class="hljs-keyword">import</span> android.os.Bundle
<span class="hljs-keyword">import</span> android.util.Log
<span class="hljs-keyword">import</span> androidx.activity.ComponentActivity
<span class="hljs-keyword">import</span> androidx.activity.compose.setContent
<span class="hljs-keyword">import</span> androidx.camera.core.CameraSelector
<span class="hljs-keyword">import</span> androidx.camera.core.ImageAnalysis
<span class="hljs-keyword">import</span> androidx.camera.core.ImageProxy
<span class="hljs-keyword">import</span> androidx.camera.core.Preview
<span class="hljs-keyword">import</span> androidx.camera.lifecycle.ProcessCameraProvider
<span class="hljs-keyword">import</span> androidx.camera.view.PreviewView
<span class="hljs-keyword">import</span> androidx.compose.foundation.layout.Column
<span class="hljs-keyword">import</span> androidx.compose.foundation.layout.fillMaxSize
<span class="hljs-keyword">import</span> androidx.compose.material.Text
<span class="hljs-keyword">import</span> androidx.compose.runtime.*
<span class="hljs-keyword">import</span> androidx.compose.ui.Modifier
<span class="hljs-keyword">import</span> androidx.compose.ui.platform.LocalContext
<span class="hljs-keyword">import</span> androidx.compose.ui.platform.LocalLifecycleOwner
<span class="hljs-keyword">import</span> androidx.compose.ui.viewinterop.AndroidView
<span class="hljs-keyword">import</span> androidx.core.content.ContextCompat
import androidx.core.app.ActivityCompat
import java.nio.ByteBuffer
import java.util.concurrent.ExecutorService
import java.util.concurrent.Executors

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">MainActivity</span>: <span class="hljs-type">ComponentActivity</span></span>() {
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">lateinit</span> <span class="hljs-keyword">var</span> cameraExecutor: ExecutorService

    <span class="hljs-keyword">override</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">onCreate</span><span class="hljs-params">(savedInstanceState: <span class="hljs-type">Bundle</span>?)</span></span> {
        <span class="hljs-keyword">super</span>.onCreate(savedInstanceState)
        cameraExecutor = Executors.newSingleThreadExecutor()
        setContent {
            CameraScreen(cameraExecutor)
        }
    }

    <span class="hljs-keyword">override</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">onDestroy</span><span class="hljs-params">()</span></span> {
        <span class="hljs-keyword">super</span>.onDestroy()
        cameraExecutor.shutdown()
    }
}

<span class="hljs-meta">@Composable</span>
<span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">CameraScreen</span><span class="hljs-params">(cameraExecutor: <span class="hljs-type">ExecutorService</span>)</span></span> {
    <span class="hljs-keyword">val</span> context = LocalContext.current
    <span class="hljs-keyword">val</span> lifecycleOwner = LocalLifecycleOwner.current
    <span class="hljs-keyword">var</span> preview <span class="hljs-keyword">by</span> remember { mutableStateOf&lt;Preview?&gt;(<span class="hljs-literal">null</span>) }
    <span class="hljs-keyword">val</span> cameraProviderFuture = remember { ProcessCameraProvider.getInstance(context) }

    Column(modifier = Modifier.fillMaxSize()) {
        AndroidView(
            modifier = Modifier.fillMaxSize(),
            factory = { ctx -&gt;
                <span class="hljs-keyword">val</span> previewView = PreviewView(ctx)
                <span class="hljs-keyword">val</span> executor = ContextCompat.getMainExecutor(ctx)
                cameraProviderFuture.addListener({
                    <span class="hljs-keyword">val</span> cameraProvider = cameraProviderFuture.<span class="hljs-keyword">get</span>()
                    preview = Preview.Builder().build().also {
                        it.setSurfaceProvider(previewView.surfaceProvider)
                    }

                    <span class="hljs-keyword">val</span> imageAnalyzer = ImageAnalysis.Builder()
                      .build()
                      .also {
                            it.setAnalyzer(cameraExecutor, LuminosityAnalyzer { luma -&gt;
                                Log.d(<span class="hljs-string">"CameraXBasic"</span>, <span class="hljs-string">"Average luminosity: <span class="hljs-variable">$luma</span>"</span>)
                            })
                        }

                    <span class="hljs-keyword">val</span> cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA

                    <span class="hljs-keyword">try</span> {
                        cameraProvider.unbindAll()
                        cameraProvider.bindToLifecycle(
                            lifecycleOwner, cameraSelector, preview, imageAnalyzer
                        )
                    } <span class="hljs-keyword">catch</span> (e: Exception) {
                        Log.e(<span class="hljs-string">"CameraXBasic"</span>, <span class="hljs-string">"Camera bind failed"</span>, e)
                    }
                }, executor)
                previewView
            }
        )
        <span class="hljs-comment">// You can add other Compose UI elements here</span>
        Text(<span class="hljs-string">"Camera Preview"</span>)
    }

    <span class="hljs-comment">// Check and request camera permissions</span>
    RequestCameraPermission()
}

<span class="hljs-keyword">private</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">LuminosityAnalyzer</span></span>(<span class="hljs-keyword">private</span> <span class="hljs-keyword">val</span> listener: (luma: <span class="hljs-built_in">Double</span>) -&gt; <span class="hljs-built_in">Unit</span>): ImageAnalysis.Analyzer {

    <span class="hljs-keyword">override</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">analyze</span><span class="hljs-params">(image: <span class="hljs-type">ImageProxy</span>)</span></span> {

        val buffer = image.planes[0].buffer   // luminance (Y) plane
        val data = buffer.toByteArray()
        <span class="hljs-keyword">val</span> pixels = <span class="hljs-keyword">data</span>.map { it.toInt() and <span class="hljs-number">0xFF</span> }
        <span class="hljs-keyword">val</span> luma = pixels.average()

        listener(luma)

        image.close()
    }
}

<span class="hljs-comment">// Permission handling</span>
<span class="hljs-meta">@Composable</span>
<span class="hljs-keyword">private</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">RequestCameraPermission</span><span class="hljs-params">()</span></span> {
    <span class="hljs-keyword">val</span> context = LocalContext.current
    <span class="hljs-keyword">if</span> (ContextCompat.checkSelfPermission(
            context,
            Manifest.permission.CAMERA
        )!= PackageManager.PERMISSION_GRANTED
    ) {
        ActivityCompat.requestPermissions(
            context <span class="hljs-keyword">as</span> ComponentActivity,
            arrayOf(Manifest.permission.CAMERA),
            <span class="hljs-number">123</span> <span class="hljs-comment">// Request code</span>
        )
    }
}

// Copy a ByteBuffer's contents into a new ByteArray
fun ByteBuffer.toByteArray(): ByteArray {
    rewind()                          // rewind to the start of the buffer
    val data = ByteArray(remaining())
    get(data)                         // copy the buffer into the byte array
    return data
}
</code></pre>
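<p>As a side note, the <code>ActivityCompat.requestPermissions</code> call above works, but in Compose the Activity Result API is generally the more idiomatic way to request a runtime permission. A hedged sketch of the same check (the composable name here is ours):</p>
<pre><code class="lang-kotlin">import android.Manifest
import android.content.pm.PackageManager
import androidx.activity.compose.rememberLauncherForActivityResult
import androidx.activity.result.contract.ActivityResultContracts
import androidx.compose.runtime.*
import androidx.compose.ui.platform.LocalContext
import androidx.core.content.ContextCompat

// Shows [content] only once CAMERA permission is granted,
// requesting it via the Activity Result API if necessary.
@Composable
fun CameraPermissionGate(content: @Composable () -&gt; Unit) {
    val context = LocalContext.current
    var granted by remember {
        mutableStateOf(
            ContextCompat.checkSelfPermission(context, Manifest.permission.CAMERA) ==
                PackageManager.PERMISSION_GRANTED
        )
    }
    val launcher = rememberLauncherForActivityResult(
        ActivityResultContracts.RequestPermission()
    ) { isGranted -&gt; granted = isGranted }

    LaunchedEffect(Unit) {
        if (!granted) launcher.launch(Manifest.permission.CAMERA)
    }
    if (granted) content()
}
</code></pre>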
<p>If you want to start learning CameraX, <a target="_blank" href="https://developer.android.com/codelabs/camerax-getting-started#0">the official getting-started codelab</a> is a good place to begin.</p>
]]></description><link>https://androidauthority.dev/camerax-improving-camera-usage-in-your-android-app</link><guid isPermaLink="true">https://androidauthority.dev/camerax-improving-camera-usage-in-your-android-app</guid><category><![CDATA[camerax]]></category><dc:creator><![CDATA[Wiseland AI Engineering Team]]></dc:creator></item><item><title><![CDATA[Making Android Apps using Generative AI]]></title><description><![CDATA[<p>Generative AI has made writing code much simpler. Building complicated apps has historically been a time-consuming process, but modern AI tools have streamlined it considerably. In this post we will see how to generate a simple Android app using Gemini.</p>
<blockquote>
<p>Can you generate code for an Android app using Jetpack that shows top headlines</p>
</blockquote>
<p>Google Gemini is able to generate fairly complex code using this simple prompt.</p>
<pre><code class="lang-kotlin"><span class="hljs-keyword">package</span> com.example.newsapp

<span class="hljs-keyword">import</span> android.os.Bundle
<span class="hljs-keyword">import</span> androidx.activity.ComponentActivity
<span class="hljs-keyword">import</span> androidx.activity.compose.setContent
<span class="hljs-keyword">import</span> androidx.compose.foundation.layout.Arrangement
<span class="hljs-keyword">import</span> androidx.compose.foundation.layout.Column
<span class="hljs-keyword">import</span> androidx.compose.foundation.layout.fillMaxSize
<span class="hljs-keyword">import</span> androidx.compose.foundation.layout.padding
<span class="hljs-keyword">import</span> androidx.compose.foundation.lazy.LazyColumn
<span class="hljs-keyword">import</span> androidx.compose.foundation.lazy.items
<span class="hljs-keyword">import</span> androidx.compose.material3.Card
<span class="hljs-keyword">import</span> androidx.compose.material3.CardDefaults
<span class="hljs-keyword">import</span> androidx.compose.material3.ExperimentalMaterial3Api
<span class="hljs-keyword">import</span> androidx.compose.material3.MaterialTheme
<span class="hljs-keyword">import</span> androidx.compose.material3.Scaffold
<span class="hljs-keyword">import</span> androidx.compose.material3.Text
<span class="hljs-keyword">import</span> androidx.compose.material3.TopAppBar
<span class="hljs-keyword">import</span> androidx.compose.runtime.Composable
<span class="hljs-keyword">import</span> androidx.compose.runtime.LaunchedEffect
<span class="hljs-keyword">import</span> androidx.compose.runtime.getValue
<span class="hljs-keyword">import</span> androidx.compose.runtime.mutableStateOf
<span class="hljs-keyword">import</span> androidx.compose.runtime.remember
<span class="hljs-keyword">import</span> androidx.compose.runtime.setValue
<span class="hljs-keyword">import</span> androidx.compose.ui.Alignment
<span class="hljs-keyword">import</span> androidx.compose.ui.Modifier
<span class="hljs-keyword">import</span> androidx.compose.ui.unit.dp
<span class="hljs-keyword">import</span> androidx.lifecycle.viewmodel.compose.viewModel
<span class="hljs-keyword">import</span> com.example.newsapp.<span class="hljs-keyword">data</span>.NewsApi
<span class="hljs-keyword">import</span> com.example.newsapp.<span class="hljs-keyword">data</span>.model.Article
<span class="hljs-keyword">import</span> com.example.newsapp.ui.theme.NewsAppTheme
<span class="hljs-keyword">import</span> kotlinx.coroutines.Dispatchers
<span class="hljs-keyword">import</span> kotlinx.coroutines.withContext

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">MainActivity</span>: <span class="hljs-type">ComponentActivity</span></span>() {
    <span class="hljs-keyword">override</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">onCreate</span><span class="hljs-params">(savedInstanceState: <span class="hljs-type">Bundle</span>?)</span></span> {
        <span class="hljs-keyword">super</span>.onCreate(savedInstanceState)
        setContent {
            NewsAppTheme {
                NewsApp()
            }
        }
    }
}

<span class="hljs-meta">@OptIn(ExperimentalMaterial3Api::class)</span>
<span class="hljs-meta">@Composable</span>
<span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">NewsApp</span><span class="hljs-params">(viewModel: <span class="hljs-type">NewsViewModel</span> = viewModel()</span></span>) {
    <span class="hljs-keyword">var</span> headlines <span class="hljs-keyword">by</span> remember { mutableStateOf&lt;List&lt;Article&gt;&gt;(emptyList()) }

    LaunchedEffect(<span class="hljs-built_in">Unit</span>) {
        withContext(Dispatchers.IO) {
            headlines = viewModel.getTopHeadlines()
        }
    }

    Scaffold(
        topBar = {
            TopAppBar(title = { Text(<span class="hljs-string">"Top Headlines"</span>) })
        }
    ) { innerPadding -&gt;
        <span class="hljs-keyword">if</span> (headlines.isNotEmpty()) {
            LazyColumn(
                modifier = Modifier
                  .padding(innerPadding)
                  .fillMaxSize(),
                verticalArrangement = Arrangement.spacedBy(<span class="hljs-number">8</span>.dp)
            ) {
                items(headlines) { article -&gt;
                    NewsCard(article)
                }
            }
        } <span class="hljs-keyword">else</span> {
            Column(
                modifier = Modifier
                  .padding(innerPadding)
                  .fillMaxSize(),
                verticalArrangement = Arrangement.Center,
                horizontalAlignment = Alignment.CenterHorizontally
            ) {
                Text(<span class="hljs-string">"Loading..."</span>)
            }
        }
    }
}

<span class="hljs-meta">@Composable</span>
<span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">NewsCard</span><span class="hljs-params">(article: <span class="hljs-type">Article</span>)</span></span> {
    Card(
        elevation = CardDefaults.cardElevation(defaultElevation = <span class="hljs-number">2</span>.dp),
        modifier = Modifier.padding(<span class="hljs-number">8</span>.dp)
    ) {
        Column(modifier = Modifier.padding(<span class="hljs-number">16</span>.dp)) {
            Text(
                text = article.title?: <span class="hljs-string">""</span>,
                style = MaterialTheme.typography.headlineSmall
            )
            Text(
                text = article.description?: <span class="hljs-string">""</span>,
                style = MaterialTheme.typography.bodyMedium
            )
        }
    }
}

class NewsViewModel : androidx.lifecycle.ViewModel() { // must extend ViewModel so viewModel() can construct it
    <span class="hljs-keyword">suspend</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">getTopHeadlines</span><span class="hljs-params">()</span></span>: List&lt;Article&gt; {
        <span class="hljs-keyword">return</span> <span class="hljs-keyword">try</span> {
            NewsApi.retrofitService.getTopHeadlines(<span class="hljs-string">"us"</span>).articles
        } <span class="hljs-keyword">catch</span> (e: Exception) {
            emptyList()
        }
    }
}
</code></pre>
<p>Gradle file generated.</p>
<pre><code class="lang-kotlin">plugins {
    id(<span class="hljs-string">"com.android.application"</span>)
    id(<span class="hljs-string">"org.jetbrains.kotlin.android"</span>)
}

android {
    namespace = <span class="hljs-string">"com.example.newsapp"</span>
    compileSdk = <span class="hljs-number">33</span>

    defaultConfig {
        applicationId = <span class="hljs-string">"com.example.newsapp"</span>
        minSdk = <span class="hljs-number">24</span>
        targetSdk = <span class="hljs-number">33</span>
        versionCode = <span class="hljs-number">1</span>
        versionName = <span class="hljs-string">"1.0"</span>

        testInstrumentationRunner = <span class="hljs-string">"androidx.test.runner.AndroidJUnitRunner"</span>
        vectorDrawables {
            useSupportLibrary = <span class="hljs-literal">true</span>
        }
    }

    buildTypes {
        release {
            isMinifyEnabled = <span class="hljs-literal">false</span>
            proguardFiles(
                getDefaultProguardFile(<span class="hljs-string">"proguard-android-optimize.txt"</span>),
                <span class="hljs-string">"proguard-rules.pro"</span>
            )
        }
    }
    compileOptions {
        sourceCompatibility = JavaVersion.VERSION_1_8
        targetCompatibility = JavaVersion.VERSION_1_8
    }
    kotlinOptions {
        jvmTarget = <span class="hljs-string">"1.8"</span>
    }
    buildFeatures {
        compose = <span class="hljs-literal">true</span>
    }
    composeOptions {
        kotlinCompilerExtensionVersion = <span class="hljs-string">"1.4.3"</span>
    }
    packaging {
        resources {
            excludes += <span class="hljs-string">"/META-INF/{AL2.0,LGPL2.1}"</span>
        }
    }
}

dependencies {

    implementation(<span class="hljs-string">"androidx.core:core-ktx:1.9.0"</span>)
    implementation(<span class="hljs-string">"androidx.lifecycle:lifecycle-runtime-ktx:2.6.2"</span>)
    implementation(<span class="hljs-string">"androidx.activity:activity-compose:1.8.1"</span>)
    implementation(platform(<span class="hljs-string">"androidx.compose:compose-bom:2023.03.00"</span>))
    implementation(<span class="hljs-string">"androidx.compose.ui:ui"</span>)
    implementation(<span class="hljs-string">"androidx.compose.ui:ui-graphics"</span>)
    implementation(<span class="hljs-string">"androidx.compose.ui:ui-tooling-preview"</span>)
    implementation(<span class="hljs-string">"androidx.compose.material3:material3"</span>)
    testImplementation(<span class="hljs-string">"junit:junit:4.13.2"</span>)
    androidTestImplementation(<span class="hljs-string">"androidx.test.ext:junit:1.1.5"</span>)
    androidTestImplementation(<span class="hljs-string">"androidx.test.espresso:espresso-core:3.5.1"</span>)
    androidTestImplementation(platform(<span class="hljs-string">"androidx.compose:compose-bom:2023.03.00"</span>))
    androidTestImplementation(<span class="hljs-string">"androidx.compose.ui:ui-test-junit4"</span>)
    debugImplementation(<span class="hljs-string">"androidx.compose.ui:ui-tooling"</span>)
    debugImplementation(<span class="hljs-string">"androidx.compose.ui:ui-test-manifest"</span>)

    <span class="hljs-comment">// Retrofit</span>
    implementation(<span class="hljs-string">"com.squareup.retrofit2:retrofit:2.9.0"</span>)
    implementation(<span class="hljs-string">"com.squareup.retrofit2:converter-gson:2.9.0"</span>)

    <span class="hljs-comment">// ViewModel</span>
    implementation(<span class="hljs-string">"androidx.lifecycle:lifecycle-viewmodel-compose:2.6.2"</span>)
}
</code></pre>
<pre><code class="lang-kotlin"><span class="hljs-keyword">package</span> com.example.newsapp.<span class="hljs-keyword">data</span>

<span class="hljs-keyword">import</span> com.example.newsapp.<span class="hljs-keyword">data</span>.model.NewsResponse
<span class="hljs-keyword">import</span> retrofit2.Retrofit
<span class="hljs-keyword">import</span> retrofit2.converter.gson.GsonConverterFactory
<span class="hljs-keyword">import</span> retrofit2.http.GET
<span class="hljs-keyword">import</span> retrofit2.http.Query

<span class="hljs-comment">// Replace with your actual API key</span>
<span class="hljs-keyword">private</span> <span class="hljs-keyword">const</span> <span class="hljs-keyword">val</span> API_KEY = <span class="hljs-string">"YOUR_API_KEY"</span>

<span class="hljs-keyword">private</span> <span class="hljs-keyword">const</span> <span class="hljs-keyword">val</span> BASE_URL = <span class="hljs-string">"https://newsapi.org/v2/"</span>

<span class="hljs-keyword">private</span> <span class="hljs-keyword">val</span> retrofit = Retrofit.Builder()
  .addConverterFactory(GsonConverterFactory.create())
  .baseUrl(BASE_URL)
  .build()

<span class="hljs-class"><span class="hljs-keyword">interface</span> <span class="hljs-title">NewsApiService</span> </span>{
    <span class="hljs-meta">@GET(<span class="hljs-meta-string">"top-headlines"</span>)</span>
    <span class="hljs-keyword">suspend</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">getTopHeadlines</span><span class="hljs-params">(
        <span class="hljs-meta">@Query(<span class="hljs-meta-string">"country"</span>)</span> country: <span class="hljs-type">String</span>,
        <span class="hljs-meta">@Query(<span class="hljs-meta-string">"apiKey"</span>)</span> apiKey: <span class="hljs-type">String</span> = API_KEY
    )</span></span>: NewsResponse
}

<span class="hljs-keyword">object</span> NewsApi {
    <span class="hljs-keyword">val</span> retrofitService: NewsApiService <span class="hljs-keyword">by</span> lazy {
        retrofit.create(NewsApiService::<span class="hljs-keyword">class</span>.java)
    }
}
</code></pre>
<pre><code class="lang-kotlin"><span class="hljs-keyword">package</span> com.example.newsapp.<span class="hljs-keyword">data</span>.model

<span class="hljs-keyword">import</span> com.google.gson.annotations.SerializedName

<span class="hljs-keyword">data</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">NewsResponse</span></span>(
    <span class="hljs-meta">@SerializedName(<span class="hljs-meta-string">"articles"</span>)</span> <span class="hljs-keyword">val</span> articles: List&lt;Article&gt;,
    <span class="hljs-meta">@SerializedName(<span class="hljs-meta-string">"status"</span>)</span> <span class="hljs-keyword">val</span> status: String,
    <span class="hljs-meta">@SerializedName(<span class="hljs-meta-string">"totalResults"</span>)</span> <span class="hljs-keyword">val</span> totalResults: <span class="hljs-built_in">Int</span>
)

<span class="hljs-keyword">data</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">Article</span></span>(
    <span class="hljs-meta">@SerializedName(<span class="hljs-meta-string">"author"</span>)</span> <span class="hljs-keyword">val</span> author: String?,
    <span class="hljs-meta">@SerializedName(<span class="hljs-meta-string">"title"</span>)</span> <span class="hljs-keyword">val</span> title: String?,
    <span class="hljs-meta">@SerializedName(<span class="hljs-meta-string">"description"</span>)</span> <span class="hljs-keyword">val</span> description: String?,
    <span class="hljs-meta">@SerializedName(<span class="hljs-meta-string">"url"</span>)</span> <span class="hljs-keyword">val</span> url: String?,
    <span class="hljs-meta">@SerializedName(<span class="hljs-meta-string">"urlToImage"</span>)</span> <span class="hljs-keyword">val</span> urlToImage: String?,
    <span class="hljs-meta">@SerializedName(<span class="hljs-meta-string">"publishedAt"</span>)</span> <span class="hljs-keyword">val</span> publishedAt: String?,
    <span class="hljs-meta">@SerializedName(<span class="hljs-meta-string">"content"</span>)</span> <span class="hljs-keyword">val</span> content: String?
)
</code></pre>
<p>This is complete, working code generated by Google Gemini.</p>
<p>You can add support for more features like navigation, deep links, offline support, better error handling, and so on.</p>
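<p>For instance, the navigation feature could be sketched with Navigation Compose (this assumes the <code>androidx.navigation:navigation-compose</code> dependency, and the route names and detail screen are hypothetical):</p>
<pre><code class="lang-kotlin">import androidx.compose.runtime.Composable
import androidx.navigation.compose.NavHost
import androidx.navigation.compose.composable
import androidx.navigation.compose.rememberNavController

// Hypothetical two-screen flow: headline list -&gt; article detail.
@Composable
fun NewsNavHost() {
    val navController = rememberNavController()
    NavHost(navController = navController, startDestination = "headlines") {
        composable("headlines") {
            NewsApp() // the existing list screen; it would need an onArticleClick callback
        }
        composable("article") {
            // A hypothetical ArticleScreen() would go here; the clicked article's URL
            // would be passed via a navigation argument or shared state.
        }
    }
}
</code></pre>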
<blockquote>
<p>Can you add NewsCard which displays the news and when clicked it opens up the news article in a separate screen.</p>
</blockquote>
<pre><code class="lang-kotlin"><span class="hljs-keyword">package</span> com.example.newsapp

<span class="hljs-keyword">import</span> android.content.Intent
<span class="hljs-keyword">import</span> android.net.Uri
<span class="hljs-keyword">import</span> android.os.Bundle
<span class="hljs-keyword">import</span> androidx.activity.ComponentActivity
<span class="hljs-keyword">import</span> androidx.activity.compose.setContent
<span class="hljs-keyword">import</span> androidx.compose.foundation.clickable
<span class="hljs-keyword">import</span> androidx.compose.foundation.layout.Arrangement
<span class="hljs-keyword">import</span> androidx.compose.foundation.layout.Column
<span class="hljs-keyword">import</span> androidx.compose.foundation.layout.fillMaxSize
<span class="hljs-keyword">import</span> androidx.compose.foundation.layout.padding
<span class="hljs-keyword">import</span> androidx.compose.foundation.lazy.LazyColumn
<span class="hljs-keyword">import</span> androidx.compose.foundation.lazy.items
<span class="hljs-keyword">import</span> androidx.compose.material3.Card
<span class="hljs-keyword">import</span> androidx.compose.material3.CardDefaults
<span class="hljs-keyword">import</span> androidx.compose.material3.ExperimentalMaterial3Api
<span class="hljs-keyword">import</span> androidx.compose.material3.MaterialTheme
<span class="hljs-keyword">import</span> androidx.compose.material3.Scaffold
<span class="hljs-keyword">import</span> androidx.compose.material3.Text
<span class="hljs-keyword">import</span> androidx.compose.material3.TopAppBar
<span class="hljs-keyword">import</span> androidx.compose.runtime.Composable
<span class="hljs-keyword">import</span> androidx.compose.runtime.LaunchedEffect
<span class="hljs-keyword">import</span> androidx.compose.runtime.getValue
<span class="hljs-keyword">import</span> androidx.compose.runtime.mutableStateOf
<span class="hljs-keyword">import</span> androidx.compose.runtime.remember
<span class="hljs-keyword">import</span> androidx.compose.runtime.setValue
<span class="hljs-keyword">import</span> androidx.compose.ui.Modifier
<span class="hljs-keyword">import</span> androidx.compose.ui.platform.LocalContext
<span class="hljs-keyword">import</span> androidx.compose.ui.unit.dp
<span class="hljs-keyword">import</span> androidx.lifecycle.viewmodel.compose.viewModel
<span class="hljs-keyword">import</span> com.example.newsapp.<span class="hljs-keyword">data</span>.NewsApi
<span class="hljs-keyword">import</span> com.example.newsapp.<span class="hljs-keyword">data</span>.model.Article
<span class="hljs-keyword">import</span> com.example.newsapp.ui.theme.NewsAppTheme
<span class="hljs-keyword">import</span> kotlinx.coroutines.Dispatchers
<span class="hljs-keyword">import</span> kotlinx.coroutines.withContext

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">MainActivity</span>: <span class="hljs-type">ComponentActivity</span></span>() {
    <span class="hljs-keyword">override</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">onCreate</span><span class="hljs-params">(savedInstanceState: <span class="hljs-type">Bundle</span>?)</span></span> {
        <span class="hljs-keyword">super</span>.onCreate(savedInstanceState)
        setContent {
            NewsAppTheme {
                NewsApp()
            }
        }
    }
}

<span class="hljs-meta">@OptIn(ExperimentalMaterial3Api::class)</span>
<span class="hljs-meta">@Composable</span>
<span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">NewsApp</span><span class="hljs-params">(viewModel: <span class="hljs-type">NewsViewModel</span> = viewModel()</span></span>) {
    <span class="hljs-keyword">var</span> headlines <span class="hljs-keyword">by</span> remember { mutableStateOf&lt;List&lt;Article&gt;&gt;(emptyList()) }

    LaunchedEffect(<span class="hljs-built_in">Unit</span>) {
        withContext(Dispatchers.IO) {
            headlines = viewModel.getTopHeadlines()
        }
    }

    Scaffold(
        topBar = {
            TopAppBar(title = { Text(<span class="hljs-string">"Top Headlines"</span>) })
        }
    ) { innerPadding -&gt;
        LazyColumn(
            modifier = Modifier
              .padding(innerPadding)
              .fillMaxSize(),
            verticalArrangement = Arrangement.spacedBy(<span class="hljs-number">8</span>.dp)
        ) {
            items(headlines) { article -&gt;
                NewsCard(article)
            }
        }
    }
}

<span class="hljs-meta">@Composable</span>
<span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">NewsCard</span><span class="hljs-params">(article: <span class="hljs-type">Article</span>)</span></span> {
    <span class="hljs-keyword">val</span> context = LocalContext.current
    Card(
        elevation = CardDefaults.cardElevation(defaultElevation = <span class="hljs-number">2</span>.dp),
        modifier = Modifier
          .padding(<span class="hljs-number">8</span>.dp)
          .clickable {
                <span class="hljs-keyword">val</span> intent = Intent(Intent.ACTION_VIEW, Uri.parse(article.url))
                context.startActivity(intent)
            }
    ) {
        Column(modifier = Modifier.padding(<span class="hljs-number">16</span>.dp)) {
            Text(
                text = article.title?: <span class="hljs-string">""</span>,
                style = MaterialTheme.typography.headlineSmall
            )
            Text(
                text = article.description?: <span class="hljs-string">""</span>,
                style = MaterialTheme.typography.bodyMedium
            )
        }
    }
}

class NewsViewModel : androidx.lifecycle.ViewModel() { // must extend ViewModel so viewModel() can construct it
    <span class="hljs-keyword">suspend</span> <span class="hljs-function"><span class="hljs-keyword">fun</span> <span class="hljs-title">getTopHeadlines</span><span class="hljs-params">()</span></span>: List&lt;Article&gt; {
        <span class="hljs-keyword">return</span> <span class="hljs-keyword">try</span> {
            NewsApi.retrofitService.getTopHeadlines(<span class="hljs-string">"us"</span>).articles
        } <span class="hljs-keyword">catch</span> (e: Exception) {
            emptyList()
        }
    }
}
</code></pre>
<h2 id="heading-conclusion">Conclusion</h2>
<p>It is now much easier to generate full apps using Google Gemini or other generative AI tools. However, you need some prompting skill, and you should expect to modify a few things yourself after the code is produced.</p>
<p>What do you think?</p>
]]></description><link>https://androidauthority.dev/making-android-apps-using-generative-ai</link><guid isPermaLink="true">https://androidauthority.dev/making-android-apps-using-generative-ai</guid><category><![CDATA[generative ai]]></category><dc:creator><![CDATA[Wiseland AI Engineering Team]]></dc:creator></item><item><title><![CDATA[DeepSeek is the first Chinese AI app to reach top 10 apps in USA]]></title><description><![CDATA[<p><a target="_blank" href="https://play.google.com/store/apps/details?id=com.deepseek.chat&amp;hl=en_US">DeepSeek has taken the world by storm</a>. Not only is it an excellent model, it is also an open-weight model, and even its training methodology has been made public.</p>
<p>DeepSeek is the first non-US AI app to hit the top 10 most-installed apps this week; it currently sits at spot 3. For context, Claude has not managed to break into the top 100 yet.</p>
<p><img src="https://play-lh.googleusercontent.com/BGuY9pzYHfcISgmYZOCt3fy27JVwpin5KkdgFLAmyFI5i2h9BptdqS4-u3CJghbIG3oj=w526-h296-rw" alt="Screenshot image" /></p>
<p>DeepSeek is currently seeing around 300K installs per day in the USA.</p>
<p><img src="https://www.geeky-gadgets.com/wp-content/uploads/2024/06/DeepSeek-Coder-2-beats-GPT4-Turbo.webp" alt="DeepSeek Coder 2 beats GPT4-Turbo open source coding model - Geeky Gadgets" /></p>
<p>DeepSeek performs excellently across all benchmarks so far, and it does so at a fraction of the cost.</p>
<p>DeepSeek supports AI-powered web search. It can search the web from natural-language queries, fetch results, and give you a much better summary than visiting the links yourself.</p>
<p>I asked it a simple question, "Who is Milton Friedman?", and it responded with the following result.</p>
<blockquote>
<p>Milton Friedman (1912–2006) was an American economist, statistician, and Nobel laureate who is widely regarded as one of the most influential economists of the 20th century. He was a leading figure in the Chicago School of Economics and a staunch advocate of free-market capitalism, limited government intervention, and individual freedom. Here are some key aspects of his life and contributions:</p>
</blockquote>
<p>I then asked it to find good shoe deals online.</p>
<p>It searched the internet for deals. I am reproducing only two of them here.</p>
<blockquote>
<h3 id="heading-1-amazon"><strong>1. Amazon</strong></h3>
<ul>
<li><p><strong>Deal</strong>: Amazon Essentials Women's ballet flats for $32 (regularly $59) and other discounted shoes under $25.</p>
</li>
<li><p><strong>Features</strong>: Free two-day shipping for Prime members, 30-day return policy, and a wide variety of brands and styles.</p>
</li>
<li><p><strong>Best For</strong>: Budget-friendly options and a vast selection of casual and athletic shoes.</p>
</li>
</ul>
<hr />
<h3 id="heading-2-zappos"><strong>2. Zappos</strong></h3>
<ul>
<li><p><strong>Deal</strong>: Hoka Bondi 8 running shoes for $97.11 (originally $165) and other winter clearance deals starting at $26.</p>
</li>
<li><p><strong>Features</strong>: Free shipping, 365-day return policy, and discounts up to 80% on brands like Hoka, Adidas, and Keen.</p>
</li>
<li><p><strong>Best For</strong>: High-quality athletic and outdoor footwear.</p>
</li>
</ul>
</blockquote>
<p>But were these deals real? Let me check.</p>
<p>These deals were not real. DeepSeek found some low-quality articles about shoe deals and provided those links in the response. There were no real shoe deals to be found.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>I think DeepSeek's success is remarkable for China, but it is nowhere near as good as it is made out to be. The hype around DeepSeek will settle down soon as reality dawns on people; the quality of its results is still fairly poor.</p>
]]></description><link>https://androidauthority.dev/deepseek-is-the-first-chinese-ai-app-to-reach-top-10-apps-in-usa</link><guid isPermaLink="true">https://androidauthority.dev/deepseek-is-the-first-chinese-ai-app-to-reach-top-10-apps-in-usa</guid><dc:creator><![CDATA[Tanvi Nadkarni]]></dc:creator></item><item><title><![CDATA[The First Beta of Android 16]]></title><description><![CDATA[<p>The first beta of Android 16 is now available, which means it's time to open the experience up to both developers and early adopters. You can now <a target="_blank" href="https://www.google.com/android/beta">enroll any supported Pixel device here</a> to get this and future Android Beta updates over-the-air.</p>
<p>This build includes support for the future of app adaptivity, Live Updates, the Advanced Professional Video format, and more. We're looking forward to <a target="_blank" href="https://developer.android.com/about/versions/16/feedback">hearing what you think</a>, and thank you in advance for your continued help in making Android a platform that works for everyone.</p>
<h2 id="heading-android-adaptive-apps"><strong>Android adaptive apps</strong></h2>
<p>Users expect apps to work seamlessly on all their devices, regardless of display size and form factor. To that end, Android 16 is <a target="_blank" href="https://android-developers.googleblog.com/2025/01/orientation-and-resizability-changes-in-android-16.html">phasing out the ability</a> for apps to restrict screen orientation and resizability on large screens. This is similar to features OEMs have added over the last several years to large screen devices to allow users to run apps at any window size and aspect ratio.</p>
<p>On screens larger than 600dp wide, apps that target API level 36 will have app windows that resize; you should check your apps to ensure your existing UIs scale seamlessly, working well across portrait and landscape aspect ratios. We're providing <a target="_blank" href="https://developer.android.com/develop/ui/compose/layouts/adaptive">frameworks, tooling, and libraries</a> to help.</p>
<h4 id="heading-key-changes"><strong>Key changes:</strong></h4>
<ul>
<li><a target="_blank" href="https://developer.android.com/about/versions/16/behavior-changes-16#implementation-details">Manifest attributes and APIs that restrict orientation and resizing</a> will be ignored for apps (but not games) on large screens.</li>
</ul>
<h4 id="heading-timeline"><strong>Timeline:</strong></h4>
<ul>
<li><p><strong>Android 16 (2025):</strong> Changes apply to large screens (600dp in width) for apps targeting API level 36 (developers can opt out)</p>
</li>
<li><p><strong>Android release in 2026:</strong> Changes apply to large screens for apps targeting API level 37 (no opt-out)</p>
</li>
</ul>
<p>It's a great time to make your app adaptive! You can test these overrides without targeting API level 36 by using the <a target="_blank" href="https://developer.android.com/guide/app-compatibility/test-debug">app compatibility framework</a> and enabling the <a target="_blank" href="https://developer.android.com/about/versions/16/reference/compat-framework-changes#universal_resizable_by_default">UNIVERSAL_RESIZABLE_BY_DEFAULT</a> flag. Learn more about <a target="_blank" href="https://android-developers.googleblog.com/2025/01/orientation-and-resizability-changes-in-android-16.html">changes to orientation and resizability APIs in Android 16</a>.</p>
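<p>As a quick sketch of how the compat-framework flag above can be toggled per app during testing: the commands below assume adb, a Pixel enrolled in the Android 16 beta, and a hypothetical package name <code>com.example.app</code>.</p>

```shell
# Opt one app into the resizability override on a connected device
adb shell am compat enable UNIVERSAL_RESIZABLE_BY_DEFAULT com.example.app

# Return the app to its default behavior when testing is done
adb shell am compat reset UNIVERSAL_RESIZABLE_BY_DEFAULT com.example.app
```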
<p>Google official blog post: <a target="_blank" href="https://android-developers.googleblog.com/2025/01/first-beta-android16.html">https://android-developers.googleblog.com/2025/01/first-beta-android16.html</a></p>
]]></description><link>https://androidauthority.dev/the-first-beta-of-android-16</link><guid isPermaLink="true">https://androidauthority.dev/the-first-beta-of-android-16</guid><dc:creator><![CDATA[Wiseland AI Engineering Team]]></dc:creator></item><item><title><![CDATA[DeepSeek R1 - Chinese challenge to OpenAI on budget]]></title><description><![CDATA[<p>In the rapidly evolving world of artificial intelligence, new platforms and technologies are constantly emerging, each promising to revolutionize the way we interact with machines. Among these, <strong>DeepSeek</strong> has emerged as a standout player, offering a unique blend of capabilities that set it apart from the competition. But what exactly is DeepSeek, and what makes it so special? Let's dive in.</p>
<h3 id="heading-what-is-deepseek"><strong>What is DeepSeek?</strong></h3>
<p>DeepSeek is an advanced AI platform designed to push the boundaries of machine learning, natural language processing (NLP), and data analytics. At its core, DeepSeek is built to empower businesses, researchers, and developers with tools that enable smarter decision-making, enhanced automation, and deeper insights into complex datasets.</p>
<p>Unlike traditional AI systems that focus on narrow applications, DeepSeek is a versatile platform that can be adapted to a wide range of industries, from healthcare and finance to retail and entertainment. Its ability to process and analyze vast amounts of data in real-time makes it a powerful tool for anyone looking to harness the full potential of AI.</p>
<h3 id="heading-what-makes-deepseek-unique"><strong>What Makes DeepSeek Unique?</strong></h3>
<p>While there are many AI platforms available today, DeepSeek stands out for several key reasons:</p>
<h4 id="heading-1-unparalleled-scalability">1. <strong>Unparalleled Scalability</strong></h4>
<p>DeepSeek is designed to handle massive datasets with ease. Whether you're working with millions of customer records or complex scientific data, DeepSeek's infrastructure ensures that performance remains consistent, even as your needs grow. This scalability makes it an ideal choice for enterprises and organizations with demanding data requirements.</p>
<h4 id="heading-2-advanced-natural-language-processing">2. <strong>Advanced Natural Language Processing</strong></h4>
<p>One of DeepSeek's standout features is its state-of-the-art NLP capabilities. The platform can understand, interpret, and generate human language with remarkable accuracy. This makes it perfect for applications like chatbots, virtual assistants, and sentiment analysis, where understanding context and nuance is critical.</p>
<h4 id="heading-3-real-time-learning-and-adaptation">3. <strong>Real-Time Learning and Adaptation</strong></h4>
<p>DeepSeek isn't just a static AI system; it's a dynamic platform that learns and adapts in real-time. By continuously analyzing new data, DeepSeek can refine its models and improve its performance over time. This means that the more you use it, the smarter it gets.</p>
<h4 id="heading-4-seamless-integration">4. <strong>Seamless Integration</strong></h4>
<p>DeepSeek is designed to integrate seamlessly with existing systems and workflows. Whether you're using cloud-based services, on-premise servers, or a hybrid infrastructure, DeepSeek can be easily incorporated into your tech stack. This flexibility reduces the time and effort required to get up and running.</p>
<h4 id="heading-5-ethical-ai-practices">5. <strong>Ethical AI Practices</strong></h4>
<p>In an era where ethical concerns around AI are increasingly prominent, DeepSeek takes a proactive approach to responsible AI development. The platform is built with transparency, fairness, and accountability in mind, ensuring that its algorithms are free from bias and its applications are used ethically.</p>
<h4 id="heading-6-industry-specific-solutions">6. <strong>Industry-Specific Solutions</strong></h4>
<p>DeepSeek goes beyond generic AI tools by offering tailored solutions for specific industries. For example, in healthcare, it can analyze patient data to predict disease outbreaks or recommend personalized treatment plans. In finance, it can detect fraudulent transactions or optimize investment strategies. This industry-specific focus ensures that users get the most relevant and impactful results.</p>
<h2 id="heading-costs-of-deepseek">Costs of DeepSeek</h2>
<p>DeepSeek's affordability is no accident; it is the result of thoughtful design, efficient resource management, and a commitment to making advanced AI accessible to everyone. By leveraging open-source technologies, cloud-native architecture, and automated tools, DeepSeek delivers a powerful platform at a fraction of the cost of its competitors.</p>
<p>In a world where AI is often seen as a luxury reserved for large corporations, DeepSeek is breaking down barriers and proving that cutting-edge technology can be both affordable and effective. Whether you're a startup, a small business, or an individual developer, DeepSeek offers a cost-effective solution that doesn't compromise on quality or performance.</p>
]]></description><link>https://androidauthority.dev/deepseek-r1-chinese-challenge-to-openai-on-budget</link><guid isPermaLink="true">https://androidauthority.dev/deepseek-r1-chinese-challenge-to-openai-on-budget</guid><category><![CDATA[Deepseek]]></category><dc:creator><![CDATA[Tanvi Nadkarni]]></dc:creator></item><item><title><![CDATA[Android XR - Is Google finally serious with virtual reality ?]]></title><description><![CDATA[<p>Google has dabbled with <a target="_blank" href="https://developer.android.com/develop/xr">VR</a> many times through several approaches and has shut those products down. Google Glass, Google Cardboard, and many more have been added to the Google graveyard.</p>
<p>Android XR is an extension of the Android platform and ecosystem. The Android XR SDK is designed to let you build XR apps using familiar Android frameworks and tools or using open standards such as OpenXR and WebXR. All compatible mobile or large screen apps will be available to install on XR headsets from the Play Store. Review the <a target="_blank" href="https://developer.android.com/develop/xr/get-started#app-manifest">compatibility considerations to see if your app is compatible</a>. [<a target="_blank" href="https://developer.android.com/develop/xr/get-started">link</a>]</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=a1Z12O5abgU">https://www.youtube.com/watch?v=a1Z12O5abgU</a></div>
<p> </p>
<p>Google's new focus on XR, however, might be something the company doubles down on in the era of an AI-first Google.</p>
<p>The key to AI success is going to be data collection. Tesla, for example, has billions of hours of footage of people driving around. Meta aims to collect similar data from its Quest headsets. Google has comparable data from YouTube and Google Photos. Adding AI glasses to the mix would give Google a better source for a wide variety of real-world data.</p>
<p>I think Google and Apple are going to push for a future in which they collect not just photos and videos but also critical information about the world in which their users operate. This data will then create a foundation for a future of AI that goes beyond text and video, another step towards real-world intelligence.</p>
]]></description><link>https://androidauthority.dev/android-xr-is-google-finally-serious-with-virtual-reality</link><guid isPermaLink="true">https://androidauthority.dev/android-xr-is-google-finally-serious-with-virtual-reality</guid><dc:creator><![CDATA[Tanvi Nadkarni]]></dc:creator></item></channel></rss>